
689 Normalization Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Source: Indeed

Ahmedabad | Full Time | Assistant Manager / Manager - Plant HR

OBJECTIVE
To support Factory Operations by ensuring systematic implementation of HR strategy and processes in the factory, leveraging human resources for higher people productivity, building high-quality teams, and focusing on cost optimization and harmonious industrial relations.

LEGAL & STATUTORY COMPLIANCE
Ensure 100% compliance with all statutes applicable to the factory and with all work norms and practices agreed in the Long Term Settlement. Ensure 100% compliance with the Workplace Rights Policy (WRP) guidelines.

WELFARE
Take ownership of welfare facilities such as the canteen, rest rooms, toilets and other requirements, providing them as per the Factories Act, 1948 and maintaining them in line with Company philosophy. Ensure these facilities apply equally to all categories of employees, including contract labour. Handle all statutory compliances and audits (Wages Act, Bonus Act, Gratuity Act, CLA, etc.), liaising with Labour, Factory, Weights & Measures, and Pollution authorities on day-to-day issues. Ensure on-time statutory and semi-governmental compliance for both manufacturing plants (Factories Act, Employment Exchange Act, ID Act and other labour and industrial laws). Independently handle audits conducted by various departments (ESIC, PF, Factory, Labour Department), including statutory and social audits of units and of contractors' statutory and social compliance. Draft, vet and plead written statements, replies to applications, cases, agreements, deeds and other company documents as required. Prepare the legal-case MIS and present it to management. Develop HR plans and policies in line with the company's overall development plan (adequate manpower deployment, manpower succession planning, shop-floor management, etc.). Maintain good internal communication within and outside the company. Introduce new employee benefit plans such as medical, accident and group gratuity schemes. Optimize manpower utilization and deployment, cost, and general administration at plant level. Examine contractors' bills and approve final payments; manage the attendance system for both permanent and contract employees and salary preparation for all permanent employees. Ensure resolution of employee grievances under the grievance-redressal system. Supervise canteen, rest room, ambulance and dispensary activities. Provide welfare amenities to employees under the schemes ruled by the Labour Welfare Board. Manage Mediclaim insurance and accident policies for all employees. Maintain and update the registers and forms required under various labour laws, liaising with government authorities. Track leave records (casual, sick and earned leave), salary sheets, salary slips, overtime, compensatory holidays, monthly incentives, leave encashment, Mediclaim, etc.

TALENT ACQUISITION
Ensure timely hiring of 150 employees for new project expansion. Release offer and appointment letters on time. Facilitate joining formalities and conduct induction/orientation programs covering management, staff, the company and company policies. Plan the overall recruitment process within the approved recruitment budget. Analyze competency mapping and workmen-category skill sets.
Managed various campus recruitment drives and initiated the DET concept in the organisation, hiring more than 300 DETs within the region as per the business plan.

PERFORMANCE MANAGEMENT SYSTEM
Manage and ensure plant KRAs: definition, mid-year review and year-end review. Propose the bell curve to management as part of the annual appraisal. Run SDPs (Succession Development Plans), including execution and validation of HiPos. Oversee smooth implementation of HR policies for manpower planning, recruitment, induction and orientation, and training and development. Organize training on new policies and projects, discipline, self-motivation, leadership, etc., covering training needs and the annual training plan. Maintain good liaison with Government of Gujarat authorities and local bodies. Strengthen audits and shop-floor management activities for a healthy work environment, ensuring safety, 5S and discipline, and smooth shop-floor running without manpower shortages. Implemented SAP and attendance/payroll software; led a zero-accident, 100% safe shop-floor environment. Oversee implementation of Kaizen activities at all levels through total employee involvement (internal and external Kaizen competitions). Facilitate the performance management process, including normalization and compliance, in coordination with Business Heads and the Talent Management Team. Handle the performance appraisal process, identify scope for enhancement through KRA evaluation, and submit a summary report to top management for approval. Prepare career growth plans for high-potential employees based on competency mapping and PMS, and build succession planning accordingly. Delivered a 7% cut in the manpower budget by periodically recalibrating HR processes.

PLANT ADMINISTRATION
Monitor and control all plant administration activities, including 5S, security, company transport, canteen, guest house, etc. Cut administration costs by 10% by recalibrating administrative processes. Institutionalized manpower planning and contractual manpower. Led projects on plant productivity and HR MIS, implementation of SAP and attendance/payroll software, and annual budget planning. Monitor manpower, overtime and productivity daily to control manpower cost against budget. Develop and maintain management reports that support decisions on current resources, planning, new projects, and other management matters. Generate monthly MIS covering recruitment, attrition, training, PMS, manpower cost, and department and regional KRAs.

OTHER INFORMATION
Location of position: Factory and Corporate Location, A-3/6/7, Swagat Industrial Park-1, Bakrol (Bujrang), Ahmedabad

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Source: LinkedIn

About This Role

Role Description
The Private Markets Insight Data Services (PDS) team seeks an Investor Account Services Lead for the India region. This individual will lead Private Markets data processing and the delivery of high-quality service to our clients, leveraging technology and automation to drive scale alongside disciplined operations best practices. Insight's Managed Data Service is a key foundation of the growing Data & Analytics solutions delivered by the Insight business and is critical to maintaining its growth. The team is responsible for document retrieval, data extraction, normalization, and delivery for investors in private markets products including Private Equity, Private Credit, Real Estate, and Infrastructure.

Key Responsibilities
Lead a team focused on creating the market-leading Private Markets database and analytics ecosystem for Cashflow & Capital Account Statement Services.
Manage a team of data analysts and work with team leads to manage workload and priorities.
Support a business growing by >30% per annum by ensuring scalable growth of services, including new document types, asset classes and beyond.
Actively participate in the digital transformation of the business, including transformation of process and workflow to leverage Aladdin's patented data and document automation solution.
Partner closely with Insight's Client Success and Sales teams to plan for continued service delivery on time (client SLAs) and with quality, as well as supporting RFPs.
Create an inclusive environment oriented around trust, open communication, creative thinking, and cohesive team effort.
Be a leader who grows the next set of leaders in the business, with the ability to become a BlackRock Global Citizen.

Experience Required
Bachelor's or Master's degree (preferably in Economics, Organizational Sciences, Mathematics, or a related Accounting background).
Demonstrated experience running an end-to-end managed data service organization.
Demonstrated transformational leadership with the ability and desire to influence people, process, and technology.
Experience in financial markets, preferably private markets, is preferred.
Ability to succinctly communicate KPI-driven progress to stakeholders and senior leadership.
Strong organizational and change management skills.
Excellent communication skills: fluency in English, both written and verbal.

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being.
Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock. BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Job Description
We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric, for our UK-based client. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.

Key Responsibilities:
Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
Create and maintain data models, semantic models, and data marts for analytical and reporting purposes.
Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
Work with business stakeholders to understand data requirements and translate them into technical specifications.
Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
Provide mentorship and technical guidance to junior data engineers.

Required Skills:
Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
Proficiency in Power BI, including building reports and dashboards and working with semantic models.
Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
Deep experience with SQL, stored procedures, and query optimization.
Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.
Strong problem-solving skills and experience in performance tuning of large datasets.
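As a quick illustration of the modeling techniques this role calls out (star schema versus normalized forms), here is a minimal sketch; Python's sqlite3 stands in for Fabric's SQL engine, and the tables, columns, and data are invented for the example:

```python
# Minimal star-schema sketch: one fact table keyed to denormalized dimensions.
# All names and values here are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales (
        date_key INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        units INTEGER, revenue REAL);
""")
con.execute("INSERT INTO dim_date VALUES (20250601, '2025-06-01', 'June', 2025)")
con.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
con.execute("INSERT INTO fact_sales VALUES (20250601, 1, 10, 250.0)")

# Typical analytical query: aggregate facts, slice by dimension attributes.
for row in con.execute("""
    SELECT d.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category"""):
    print(row)
```

The denormalized dimensions trade storage for simpler, faster analytical joins, which is the usual argument for a star schema in a reporting mart.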

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Key Responsibilities
Develop, optimize, and maintain complex SQL queries, stored procedures, functions, and views.
Analyze slow-performing queries and optimize execution plans to improve database performance.
Design and implement indexing strategies to enhance query efficiency.
Work with developers to optimize database interactions in applications.
Develop and implement Teradata best practices for large-scale data processing and ETL workflows.
Monitor and troubleshoot Teradata performance issues using tools like DBQL (Database Query Log), Viewpoint, and Explain Plan analysis.
Perform data modeling, normalization, and schema design improvements.
Collaborate with teams to implement best practices for database tuning and performance enhancement.
Automate repetitive database tasks using scripts and scheduled jobs.
Document database architecture, queries, and optimization techniques.

Required Skills & Qualifications:
Strong proficiency in Teradata SQL, including query optimization techniques.
Strong proficiency in SQL (T-SQL, PL/SQL, or equivalent).
Experience with indexing strategies, partitioning, and caching techniques.
Knowledge of database normalization, denormalization, and best practices.
Familiarity with ETL processes, data warehousing, and large datasets.
Experience in writing and optimizing stored procedures, triggers, and functions.
Hands-on experience in Teradata performance tuning, indexing, partitioning, and statistics collection.
Experience with EXPLAIN plans, DBQL analysis, and Teradata Viewpoint monitoring.
Power BI / Tableau integration experience is good to have.

About Us
Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency, to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer
Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities
Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
Take part in information security training and act accordingly while handling information.
Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
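The tuning loop this role describes (read the plan, add an index, refresh statistics, re-check) can be sketched generically; sqlite3's EXPLAIN QUERY PLAN stands in for Teradata's EXPLAIN here, and the table is hypothetical:

```python
# Illustrative only: not Teradata syntax, but the same workflow the posting
# describes -- inspect the plan, add an index, confirm the scan becomes a seek.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 100, float(i)) for i in range(10_000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # full table scan

con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
con.execute("ANALYZE")  # refresh optimizer statistics, as Teradata's COLLECT STATISTICS does
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # now an index search
```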

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Understand automation/data analytics requirements from the business and coordinate with developers for technical/functional assistance, estimating the business value for the organization.

Technical Expertise To Have
Create and design or structure the data architecture in a standard environment (database / data lake / cloud (GCP)) with data security.
Structure large data sets into an easy, scalable format.
Perform data integration and normalization on master data, including creation, updates, and deletion or cleansing.
Develop technical code to convert business logic into data-wrangling algorithms and tabulation.
Find technical solutions to resolve problems in an analytical manner.
Transform business data into real-time visualizations.
Establish KPIs to measure the effectiveness of business decisions.
Build visually enhanced dashboards with predictive analysis and real-time data refresh.
Provide continuous monitoring and maintenance for delivered products.
Follow organization standards and maintain and track good documentation for daily transformations.
Collaborate with stakeholders and communicate regular updates.
Convert actions into delivered value.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description

Responsibilities
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases and databases involving big-data processing. Work on data security and compliance by implementing access controls, encryption, and compliance standards. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency (a sketch follows this listing). Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer on the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources like StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.

Skills And Experience
Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfortable with autonomy and ability to work independently.

Qualifications
5+ years of experience in database engineering.

Additional Information
Perks:
Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves
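A minimal sketch of the import workflow referenced above, assuming a hypothetical customers.csv and required columns; pandas and SQLAlchemy are modules the posting itself names:

```python
# Validate a spreadsheet export, then load it into a relational database.
# File name, table name, and required columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

REQUIRED = {"id", "name", "email"}

def import_csv(path: str, engine) -> int:
    df = pd.read_csv(path)
    missing = REQUIRED - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    # Basic accuracy/consistency checks before loading.
    df = df.dropna(subset=["id"]).drop_duplicates(subset="id")
    df.to_sql("customers", engine, if_exists="append", index=False)
    return len(df)

engine = create_engine("sqlite:///:memory:")  # swap for a postgresql:// URL in practice
# rows_loaded = import_csv("customers.csv", engine)
```

Swapping the SQLite URL for a PostgreSQL or BigQuery connection string is the usual path to the target systems the posting lists.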

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About Us: AppZen is the leader in autonomous spend-to-pay software. Its patented artificial intelligence accurately and efficiently processes information from thousands of data sources so that organizations can better understand enterprise spend at scale and make smarter business decisions. It seamlessly integrates with existing accounts payable, expense, and card workflows to read, understand, and make real-time decisions based on your unique spend profile, leading to faster processing times and fewer instances of fraud or wasteful spend. Global enterprises, including one-third of the Fortune 500, use AppZen's invoice, expense, and card transaction solutions to replace manual finance processes and accelerate the speed and agility of their businesses. To learn more, visit us at www.appzen.com.

About the Role: We are looking for a Senior Data Scientist to work on our growing AI stack. You will be working with a team of highly skilled and motivated data scientists and machine learning engineers. If you are excited about natural language understanding and machine translation, AppZen is the right place for you to apply and grow your skills.

Must haves:
Solid understanding of machine learning fundamentals and familiarity with standard algorithms and techniques.
Expert knowledge of a statistical computing language such as Python.
Knowledge of probability and statistics, including experimental design, predictive modeling, optimization, and causal inference.
Lead the design, development, and implementation of state-of-the-art NLP algorithms and models using Transformers and similar architectures.
Good understanding of MLOps tools/processes like ElasticSearch, Jenkins, and Docker is a plus.
Good knowledge of deep learning frameworks like PyTorch and TensorFlow is a must.
Ensure data quality throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc.
Manage your own process: identify and execute on high-impact projects, triage external requests, and make sure you bring projects to conclusion in time for the results to be useful.
Excellent written and verbal technical communication skills; communicate proposals and results in a clear manner, backed by data and coupled with actionable conclusions to drive business decisions.
M.Tech/B.Tech or equivalent experience in Computer Science, Engineering, Statistics, or another relevant technical field.
Must have 4+ years of industry experience.
You are a team player.

Nice-to-Have:
Track record of having developed novel algorithms, e.g. publications in one or more of the following: KDD, WWW, NIPS, ISWC, NAACL, ACL, SIGIR, EMNLP, ICML, etc.
Expertise in building and fine-tuning LLMs using Transformers and RAG systems.
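For readers unfamiliar with the Transformer tooling named above, a tiny hedged example using Hugging Face's public pipeline API; the model is the library's default, chosen only for illustration, and nothing here is AppZen-specific:

```python
# High-level Transformers usage: a one-line text classifier.
# The first call downloads the library's default fine-tuned model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The invoice total does not match the purchase order."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```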

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description
At Amazon, we strive to be Earth's most customer-centric company, where customers can find and discover anything they want to buy online. Our mission in International Seller Services (ISS) is to provide technology solutions that improve the seller and customer experience, drive seller compliance, maximize seller success, and improve internal workforce productivity. The team's main focus is to build products that are scalable across different regions of the world, while working in partnership with ISS regional stakeholders and multiple partner teams across Amazon. As a Data Scientist, you will be responsible for modeling complex problems, discovering insights, and building risk algorithms that identify opportunities through statistical models, machine learning, and visualization techniques to improve operational efficiency. You will leverage your expertise in Machine Learning, Natural Language Processing (NLP), and Large Language Models (LLMs) to develop innovative solutions for Amazon's ISS team, building innovative algorithms and discovering actionable insights to enhance operational efficiency in the e-commerce space. The role combines the latest AI technology with practical business applications, and requires someone passionate about transforming the way we interact with technology while delivering measurable impact through advanced analytics and machine learning solutions. You will need to collaborate effectively with business and product leaders within ISS and cross-functional teams to build scalable solutions against high organizational standards. The candidate should be able to apply a breadth of tools, data sources, and data science techniques to answer a wide range of high-impact business questions and proactively present new insights in a concise and effective manner, and should be an effective communicator capable of independently driving issues to resolution and communicating insights to non-technical audiences. This is a high-impact role with goals that directly affect the bottom line of the business.

Responsibilities:
- Analyze terabytes of data to define and deliver complex analytical deep dives that unlock insights and build scalable data science solutions to ensure the security of Amazon's platform and transactions
- Build machine learning and/or statistical models that evaluate transaction legitimacy and track impact over time
- Ensure data quality throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, and cross-lingual alignment/mapping
- Define and conduct experiments to validate or reject hypotheses, and communicate insights and recommendations to Product and Tech teams
- Develop efficient data-querying infrastructure for both offline and online use cases
- Collaborate with cross-functional teams from multidisciplinary science, engineering and business backgrounds to enhance current automation processes
- Learn and understand a broad range of Amazon's data resources and know when, how, and which to use and which not to use
- Maintain technical documents and communicate results to diverse audiences with effective writing, visualizations, and presentations

Basic Qualifications
2+ years of data scientist experience
3+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software (e.g. R, SAS, Matlab, etc.)
3+ years of experience with machine learning / statistical modeling data analysis tools and techniques, and the parameters that affect their performance
Experience applying theoretical models in an applied environment

Preferred Qualifications
Experience in Python, Perl, or another scripting language
Experience in an ML or data scientist role with a large technology company

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI HYD 13 SEZ
Job ID: A2943984
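The normalization stage listed among the responsibilities above commonly means feature standardization before model fitting; here is a minimal sketch with scikit-learn, using invented values:

```python
# Standardize numeric features: fit statistics on training data only,
# then reuse them on new data to avoid leakage. Columns are hypothetical
# (e.g. order value, item count).
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[120.0, 3], [80.0, 1], [200.0, 7]])
X_test = np.array([[150.0, 4]])

scaler = StandardScaler().fit(X_train)   # learn per-column mean and std
X_train_std = scaler.transform(X_train)  # zero mean, unit variance per column
X_test_std = scaler.transform(X_test)    # reuse training statistics
print(X_train_std, X_test_std, sep="\n")
```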

Posted 1 week ago

Apply

0 years

4 - 4 Lacs

Coimbatore

On-site

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary.

Job Summary
To support our extraordinary teams who build great products and contribute to our growth, we're looking to add a Specialist – GBS Planning in Coimbatore.

What a typical day looks like:
Provide expertise and support to the Customer Focus Team (CFT), ensuring materials planning coverage for a specific project or projects as required.
Provide materials support for weekly planned production orders, enabling on-time kit drop to meet the customer schedule.
Key assignments include providing timely materials status through available shortage reports, submission of excess and obsolete inventory to the customer, work order management, inventory management, and MRB and DR management to achieve the operating goals.
Act as Senior Materials Planner for new and emerging NPI accounts, providing faster service to the NPI customer and communicating effectively with the customer while protecting Flex's business interests.
Work on customer forecasts for activities like normalization and forecast comparison.
Work on customer forecasts and shipments using the waterfall method.
Analyze the availability of materials and capacity based on customer demand, and produce an aggressive but achievable loading schedule.
Run weekly system reports to determine material shortages and work on their closure with the buying team.
Handle work order management based on the build plan.
Identify and take various inventory management measures.

The experience we're looking to add to our team:
Education: Bachelor's Degree or Engineering Graduates.
Experience: 3-5 years in Planning / Supply Chain. Must be able to analyze the supply chain for demand pull-in or push-out.
Mandatory: knowledge of computer software applications, MS Excel, Word & PowerPoint.
Knowledge of a planning tool like Kinaxis will be an added advantage.
Proficiency: ERP/P2P systems (BAAN / SAP / Oracle); Kinaxis knowledge will be an added advantage.
Knowledge of engineering BOMs, product structure, EOL, and the ECO management process.
Knowledge: the complete planning cycle, including MPS, MRP, demand planning, materials planning, and production planning.
Communication: communication, both verbal and written, is an important part of this role. The job holder is required to exchange information, ideas and views on business-related matters concerning the Planning function throughout the Company at all levels.

What you'll receive for the great work you provide:
Health Insurance
PTO

Job Category: Global Procurement & Supply Chain

Flex pays for all costs associated with the application, interview or offer process; a candidate will not be asked for any payment related to these costs. Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law.
We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).

Posted 1 week ago

Apply

2.0 years

2 - 4 Lacs

Vadodara

On-site

Are you passionate about data, performance tuning, and writing efficient SQL? Join our growing team where you'll work on exciting projects and contribute to maintaining high-performance, scalable database systems.

What we're looking for:
- Strong SQL skills - experience with SQL Server / PostgreSQL / MySQL
- Understanding of normalization, indexing, and query optimization
- Advanced query-writing skills
- Knowledge of database backup, recovery & security
- Basic Linux/Unix scripting (a plus)
- Exposure to cloud platforms like AWS RDS or Google Cloud SQL (bonus!)

Location: Vadodara
Apply here: khushirai@blueboxinfosoft.com

Let's build smarter systems together!

Job Type: Full-time
Pay: ₹200,000.00 - ₹400,000.00 per year
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Education: Bachelor's (Preferred)
Experience: Database development: 2 years (Preferred)
Location: Vadodara, Gujarat (Required)
Work Location: In person
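Since the ad centers on normalization and indexing, here is a compact, hypothetical sketch of both: a flat table decomposed into third normal form, plus an index on the join key (sqlite3 used for convenience; the schema is invented):

```python
# Normalization in miniature: repeated customer details are moved out of the
# orders table into their own table, referenced by key, and the common join
# column is indexed.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Unnormalized: customer name/city repeat on every order row.
    CREATE TABLE orders_flat (order_id INTEGER, customer_name TEXT, city TEXT, total REAL);

    -- Normalized (3NF): customer attributes stored once, referenced by key.
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        total REAL);

    -- Index the foreign key that common queries join and filter on.
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
```

The normalized form avoids update anomalies (a customer's city changes in one place); the index keeps the extra join cheap.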

Posted 1 week ago

Apply

8.0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Data Migration Architect
More than 8 years of experience with data architecture, large-scale data modelling, database design, and business requirements analysis. Data migration expertise is a must-have skill. Work with the Senior Data Lead / Architect to develop the migration framework and scripts. Responsible for overall data architecture across all areas and domains of the enterprise, including data acquisition, ODS, data warehouse, data provisioning, and ETL. Gather and analyse business data requirements and model these needs. Expert-level understanding of relational database concepts, dimensional database concepts, database architecture and design, ontology, and taxonomy design. Experience using CA Erwin to develop enterprise data models. Set standards for data management, analyse the current state and conceive the desired future state, and conceive the projects needed to close the gap between the two. Strong understanding of best practices in data modelling techniques, including in-depth knowledge of the various normalization and dimensional modelling approaches and their appropriate usage in various solutions. Provide guidance on technical specifications and data modelling, and review proposed data marts, data models, and other dimensional uses of data within the data warehouse. Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Experience and knowledge of the Talend Data Integration Platform. Analyse the account structure, contact, pricing and other related objects to make sure the required data is moved from the source system(s) (Innovative or cForce) to iQuest. Map data attributes and create mapping documents as required. Write ETL (Extract, Transform, Load) jobs to read data from the source and load it to the destination. Cleanse (de-dup, etc.) and write transformation logic for data transformation. Develop an error-handling strategy with the data lead for exceptions and missing values, and incorporate it into the scripts. Develop a rollback mechanism for rinse-and-repeat activities. Assist in QA and UAT activities. Liaise with the change management teams as required.
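A minimal, hypothetical sketch of the migration-script work described above: extract from a source table, de-duplicate on a business key, transform, load, and route bad rows to an error table per the error-handling strategy. Table and column names are invented, and sqlite3 stands in for the actual source and target systems:

```python
# Rinse-and-repeat migration step: idempotent enough to re-run, with
# exceptions captured rather than silently dropped.
import sqlite3

def migrate(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    seen = set()
    for account_id, name, price in src.execute(
            "SELECT account_id, name, price FROM source_accounts"):
        if account_id in seen:
            continue                         # de-dup on the business key
        seen.add(account_id)
        if name is None or price is None:    # error handling for missing values
            dst.execute("INSERT INTO migration_errors VALUES (?, 'missing field')",
                        (account_id,))
            continue
        # Transformation logic before loading to the destination.
        dst.execute("INSERT INTO target_accounts VALUES (?, ?, ?)",
                    (account_id, name.strip().upper(), round(price, 2)))
    dst.commit()
```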

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Job Overview
We are seeking a skilled Data Engineer to join our team. The successful candidate will be responsible for maintaining and optimizing data pipelines, implementing robust data checks, and ensuring the accuracy and integrity of data flows. This role is critical in supporting data-driven decision-making processes, especially in the context of our insurance-focused business operations.

Key Responsibilities
Data Collection and Acquisition: source identification, data licensing and compliance, data crawling/collection
Data Preprocessing and Cleaning: data cleaning, text tokenization, normalization, noise filtering (a sketch of this step follows the listing)
Data Transformation and Feature Engineering: text embedding, text augmentation, handling multilingual data
Data Pipeline Development: scalable pipelines, ETL processes, automation
Data Storage and Management: data warehousing, database optimization, version control
Collaboration with Data Scientists and ML Engineers: data accessibility, support for model development, data quality assurance
Performance Optimization and Scaling: efficient data handling, distributed computing
Data Security and Privacy: data anonymization, compliance with regulations
Documentation and Reporting: data pipeline documentation, reporting

Candidate Profile
6-10 years of relevant experience in data engineering tools.
Tools:
Data Processing & Storage: Apache Spark, Apache Hadoop, Apache Kafka, Google BigQuery, AWS S3, Databricks
Machine Learning Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, scikit-learn
Data Pipelines & Automation: Apache Airflow, Kubeflow, Luigi
Version Control & Collaboration: Git, DVC (Data Version Control)
Data Extraction: BeautifulSoup, Scrapy, APIs (RESTful, GraphQL)

What We Offer
EXL Analytics offers an exciting, fast-paced and innovative environment that brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. Sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
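A small sketch of the text-normalization step referenced in the preprocessing item above; the specific rules shown (Unicode NFKC, lowercasing, URL stripping) are generic examples, not EXL's actual pipeline:

```python
# Text normalization and noise filtering for mixed-script, multilingual input.
import re
import unicodedata

def normalize_text(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)  # unify Unicode variants (e.g. fullwidth forms)
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # noise filtering: strip URLs
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace and newlines
    return text

print(normalize_text("Ｃｌａｉｍ  approved!\nSee https://example.com"))
# -> "claim approved! see"
```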

Posted 1 week ago

Apply

0 years

0 Lacs

Mulshi, Maharashtra, India

On-site

Source: LinkedIn

Apply Before: 20/05/2025
Position: PHP Developer/Laravel
Location: Dange Chowk, Thergaon, Pune
Employment Type: Full Time
CTC: As per company norms.

Job Description (in Brief)
We are hiring PHP Developers with 0 to 6 months of experience who are proficient in PHP, WordPress, Laravel, and CodeIgniter to develop websites and web applications. The ideal candidate will participate in the complete software development lifecycle, from requirements analysis to testing, and must be comfortable working in a team or independently.

Technical Key Skills
Strong expertise in MVC and PHP frameworks (Laravel, CodeIgniter)
Proficiency in jQuery, AJAX, Bootstrap
Knowledge of HTML5, CSS3, JavaScript, SQL Server, WordPress, MySQL
Hands-on experience with Core PHP, APIs, AJAX
Familiarity with tools like Jira, Trello, Basecamp, Click, Bug Herd
Proficient with Git and RESTful APIs
Knowledge of Linux, LAMP/WAMP, and server management
Experience with database optimization and query performance
Preferred: exposure to open-source platforms like WordPress and Shopify

Desired Competencies
Strong understanding of OOPS concepts
Ability to design and maintain scalable applications
Experience in database design, normalization, and RDBMS
Good communication (written & spoken) and logical reasoning skills
Ability to work independently and as part of a team

Roles & Responsibilities
Develop and maintain web-based applications using open-source tools
Manage Laravel app servers for performance and uptime
Integrate third-party APIs and services
Participate in QA processes including code reviews and testing
Debug server and performance issues
Handle project lifecycles independently, from design to delivery
Maintain technical documentation and support tools
Design scalable applications and database structures
Ensure on-time project delivery and updates
Suggest improvements for development tools and practices
Collaborate with team members to solve complex problems

We are no longer accepting applications for this ad. Contact us for more details.

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Vadodara, Gujarat

On-site

Source: Indeed

Are you passionate about data, performance tuning, and writing efficient SQL? Join our growing team where you'll work on exciting projects and contribute to maintaining high-performance, scalable database systems.

What we're looking for:
- Strong SQL skills - experience with SQL Server / PostgreSQL / MySQL
- Understanding of normalization, indexing, and query optimization
- Advanced query-writing skills
- Knowledge of database backup, recovery & security
- Basic Linux/Unix scripting (a plus)
- Exposure to cloud platforms like AWS RDS or Google Cloud SQL (bonus!)

Location: Vadodara
Apply here: khushirai@blueboxinfosoft.com

Let's build smarter systems together!

Job Type: Full-time
Pay: ₹200,000.00 - ₹400,000.00 per year
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Education: Bachelor's (Preferred)
Experience: Database development: 2 years (Preferred)
Location: Vadodara, Gujarat (Required)
Work Location: In person

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

We are looking to hire a Data or Business Analyst to join our data team. You will take responsibility for managing our master data set, developing reports, and troubleshooting data issues. To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of the popular data analysis tools and databases.

Responsibilities:
Managing master data, including creation, updates, and deletion.
Managing users and user roles.
Providing quality assurance of imported data, working with quality assurance analysts if necessary.
Commissioning and decommissioning of data sets.
Processing confidential data and information according to guidelines.
Helping develop reports and analyses.
Managing and designing the reporting environment, including data sources, security, and metadata.
Supporting the data warehouse in identifying and revising reporting requirements.
Supporting initiatives for data integrity and normalization.
Assessing tests and implementing new or upgraded software and assisting with strategic decisions on new systems.
Generating reports from single or multiple systems.
Troubleshooting the reporting database environment and reports.
Evaluating changes and updates to source production systems.
Training end-users on new reports and dashboards.
Providing technical expertise in data storage structures, data mining, and data cleansing.

Requirements:
Bachelor's degree from an accredited university or college in computer science.
Work experience as a Data or Business Analyst or in a related field.
Ability to work with stakeholders to assess potential risks.
Ability to analyze existing tools and databases and provide software solution recommendations.
Ability to translate business requirements into nontechnical, lay terms.
High-level experience in methodologies and processes for managing large-scale databases.
Demonstrated experience in handling large data sets and relational databases.
Understanding of addressing and metadata standards.
High-level written and verbal communication skills.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Title: ServiceNow Developer
Location: Noida Sector 90
Duration: Full-time, Permanent

Job Summary
We are seeking a highly skilled ServiceNow professional with deep expertise in Hardware Asset Management (HAM), Software Asset Management (SAM), and the Configuration Management Database (CMDB). The ideal candidate will play a key role in designing, implementing, and optimizing asset and configuration management solutions on the ServiceNow platform. This role requires both strong technical acumen and a functional understanding of IT asset lifecycle and configuration management best practices.

Key Responsibilities
Design and configure ServiceNow modules including HAM, SAM, and CMDB to align with business goals and ITIL processes.
Implement best practices for asset discovery, normalization, license compliance, and reconciliation using ServiceNow Discovery and IntegrationHub.
Ensure CMDB data integrity and health through effective class models, data normalization, and relationship mapping.
Define asset lifecycle workflows for hardware and software, from procurement to retirement.
Integrate ServiceNow with third-party systems (e.g., SCCM, JAMF, Tanium, Flexera, AWS, Azure) for accurate asset and configuration data ingestion.
Lead workshops with stakeholders to gather requirements and translate them into technical solutions.
Establish and enforce governance, data quality, and reconciliation policies for CMDB and Asset Management.
Collaborate with ITSM, ITOM, Security Ops, and Procurement teams to ensure data alignment across the platform.
Mentor junior developers and provide technical oversight for asset- and CMDB-related enhancements.
Drive the roadmap for HAM/SAM/CMDB capabilities in alignment with ServiceNow's latest releases.

Required Skills & Experience
5+ years of hands-on experience in ServiceNow with a focus on HAM, SAM, and CMDB.
Deep knowledge of ServiceNow Discovery, the asset management lifecycle, software license management, and CMDB design principles.
Proficiency in JavaScript, the Glide API, Flow Designer, and REST/SOAP integrations.
Experience implementing ServiceNow SAM Professional and managing vendor software models, entitlements, and compliance.
Familiarity with data ingestion sources and normalization techniques using ILMT, SCCM, BigFix, etc.
Understanding of the ITIL v3/v4 framework, especially around Asset, Configuration, and Change Management.
Strong analytical and problem-solving skills, with attention to detail.
Excellent communication and stakeholder management skills.

Certifications (would be great, not mandatory)
ServiceNow Certified System Administrator
ServiceNow Certified Implementation Specialist – HAM / SAM / CMDB / Discovery
ITIL v3 or v4 Foundation Certification
ServiceNow Certified Technical Architect (a plus)

What the role offers:
Work on enterprise-scale ServiceNow implementations.
Join a high-performing, collaborative ITSM/ITAM team.
Opportunity to lead digital transformation initiatives using ServiceNow's latest technologies.
Flexible working environment and continuous learning culture.
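As a hedged illustration of the REST integrations the role names, here is a minimal read from the CMDB through ServiceNow's documented Table API; the instance name and credentials are placeholders, and production code would page results and use OAuth rather than basic auth:

```python
# Query active computer CIs from a ServiceNow CMDB via the Table API.
import requests

INSTANCE = "your-instance"  # placeholder instance name

resp = requests.get(
    f"https://{INSTANCE}.service-now.com/api/now/table/cmdb_ci_computer",
    params={
        "sysparm_query": "install_status=1",          # example filter
        "sysparm_fields": "name,serial_number",
        "sysparm_limit": 10,
    },
    auth=("admin", "password"),                       # placeholder credentials
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for ci in resp.json()["result"]:
    print(ci["name"], ci["serial_number"])
```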

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Description – PostgreSQL DBA
We at Pine Labs are looking for those who share our core belief: "Every Day is Game day". We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are looking for a skilled PostgreSQL Database Administrator to join our dynamic IT team. The ideal candidate will be responsible for the installation, configuration, maintenance, and performance optimization of PostgreSQL databases, including designing and implementing database schemas and managing security and access. The role also covers backup and recovery, troubleshooting, and participation in database migration projects. If you have a passion for technology, a commitment to excellence, and the skills and experience required to excel as a PostgreSQL Database Administrator, we look forward to receiving your application and discussing how you can contribute to our team's success.

Responsibilities we entrust you with
Database Setup and Configuration: installing PostgreSQL, configuring instances, setting up client authentication, and managing connection parameters.
Database Design and Schema Management: designing database schemas, tables, and relationships, implementing indexing strategies, and ensuring normalization.
Security and Access Control: implementing security measures, managing user roles and permissions, and auditing database activity.
Performance Monitoring and Optimization: monitoring database performance, tuning queries, optimizing storage, and identifying performance bottlenecks (a monitoring sketch follows this listing).
Backup and Recovery: implementing backup and recovery plans, ensuring data integrity and availability in case of failure.
Troubleshooting and Support: investigating and resolving database issues, providing support to users and developers, and documenting troubleshooting procedures.
Migration Projects: participating in database migration projects, including migrating from other database systems to PostgreSQL.
Automation and Scripting: automating tasks using scripting languages to streamline administrative processes.
Documentation: maintaining documentation for database configurations, procedures, and troubleshooting steps.
Server Sizing: estimating the server configuration based on data volume and performance requirements.

What matters in this role
Relevant work experience: 10+ years in a relevant field.
BE/BTech in computer science or technology.
PostgreSQL Expertise: proficiency in PostgreSQL syntax, features, and administration tools.
SQL: strong understanding of SQL, including query optimization and data manipulation.
Database Design Principles: knowledge of database normalization, indexing, and other design principles.
Operating Systems: understanding of operating system concepts related to database performance and stability.
Security: knowledge of database security practices and authentication mechanisms.
Scripting Languages: familiarity with scripting languages like Python or Bash for automation.
Troubleshooting: strong analytical and problem-solving skills to diagnose and resolve database issues.
Communication: ability to communicate effectively with developers, users, and other team members.
Ability to work independently and collaboratively within a team.
Experience with Linux operating systems and scripting languages.
Knowledge of PostgreSQL on AWS and on-premises is mandatory.

What we value in our people
You take the shot: you decide fast and you deliver right.
You are the CEO of what you do: you show ownership and make things happen.
You own tomorrow: by building solutions for the merchants and doing the right thing.
You sign your work like an artist: you seek to learn and take pride in the work you do.
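A minimal sketch of the performance-monitoring duty flagged above: flagging long-running queries via PostgreSQL's standard pg_stat_activity view. Connection settings are placeholders, and psycopg2 is just one common driver:

```python
# Report sessions whose current query has been running over five minutes.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="monitor", password="secret")  # placeholders
with conn.cursor() as cur:
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, left(query, 80)
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > interval '5 minutes'
        ORDER BY runtime DESC
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(pid, runtime, state, query)  # candidates for tuning or termination
conn.close()
```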

Posted 1 week ago

Apply

0.0 - 6.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Source: Indeed

MS-Banking | Navi Mumbai
Posted On: 05 Jun 2025
End Date / Closing Date: 04 Aug 2025
Required Experience: 3 - 6 Years
No. of Openings: 1
Designation: Senior Test Engineer
Legal Entity: QualityKiosk Technologies Private Limited
Legal Entity Location: Navi Mumbai
Country: India | State: Maharashtra | City: Navi Mumbai
Working Location: Mahape
Client Location: NA
Skills: ETL, Database Testing, Banking, Regulatory Reporting
Highest Education: Other Certification
Working Language: English

JOB DESCRIPTION
We have an excellent opportunity for an ETL Database Tester in the banking domain.
Role: Senior ETL Database Tester
Years of experience: 3.3 - 6 years
Location: Navi Mumbai (only immediate joiners preferred)

Requirements:
Mandatory experience: banking, regulatory report testing, and ETL & database testing.
Knowledge of designing complex SQL queries.
Knowledge of the data warehouse domain; banking-domain skills:
1. Indexing
2. Normalization
3. Handling high-level data
4. Regulatory report testing
Flexible to travel to client locations.
Should be able to identify risks and highlight them when required.

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

Source: Indeed

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).

Experience:
3-5 years of experience working with SQL Server (versions 2016, 2017, 2019, or later).
Strong hands-on experience with T-SQL, stored procedures, triggers, and functions.
Experience with SQL Server Management Studio (SSMS), SQL Profiler, and other SQL Server tools.
Experience in database design, normalization, and indexing strategies.

Technical Skills:
Strong proficiency in SQL query optimization and performance tuning.
Familiarity with SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS) is a plus.
Understanding of database security, backup, and recovery techniques.
Knowledge of database clustering, replication, and high-availability solutions (AlwaysOn, mirroring, etc.).

Soft Skills:
Strong problem-solving and analytical abilities.
Excellent communication skills for working with cross-functional teams.
Detail-oriented, with an ability to work independently and manage multiple tasks.

Job Opening ID: RRF_5378
Job Type: Permanent
Industry: IT Services
Date Opened: 05/06/2025
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411057
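As a hedged sketch of the indexing-maintenance work implied above, here is a read of fragmentation from SQL Server's documented sys.dm_db_index_physical_stats DMV; the connection string is a placeholder, and pyodbc is one common driver:

```python
# List indexes in the current database whose fragmentation exceeds the
# commonly cited 30% REBUILD threshold.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=AppDb;Trusted_Connection=yes;TrustServerCertificate=yes")  # placeholder
cur = conn.cursor()
cur.execute("""
    SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
           s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.avg_fragmentation_in_percent > 30
    ORDER BY s.avg_fragmentation_in_percent DESC
""")
for table_name, index_name, frag in cur.fetchall():
    print(table_name, index_name, round(frag, 1))
```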

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description: Role & Responsibilities
We are looking for a skilled SQL Developer with hands-on experience in data reconciliation and strong analytical skills. The ideal candidate will be responsible for writing complex SQL queries, maintaining database structures, and ensuring data accuracy by implementing reconciliation logic across systems.

Required Skills:
• 3+ years of experience in SQL development and database management (e.g., MS SQL Server, Oracle, MySQL)
• Proven experience in data reconciliation projects or financial data matching
• Strong understanding of data structures, normalization, and relational databases
• Proficient in writing complex joins, subqueries, and performance-tuned SQL
• Experience working with large datasets in structured environments
• Familiarity with tools like Excel, Python (for data comparison), or any ETL tool is a plus
• Good communication and documentation skills

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
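The core task here — implementing reconciliation logic across systems — usually comes down to a query that surfaces rows present in one system but not the other, or present in both with differing values. Below is a minimal, hypothetical sketch using two invented tables, system_a and system_b; the UNION-of-LEFT-JOINs pattern is used because it ports to engines without FULL OUTER JOIN.

```python
import sqlite3

# Hypothetical reconciliation between two systems recording the same trades.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE system_a (trade_id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE system_b (trade_id INTEGER PRIMARY KEY, amount REAL);
INSERT INTO system_a VALUES (1, 100.0), (2, 200.0), (3, 300.0);
INSERT INTO system_b VALUES (1, 100.0), (2, 250.0), (4, 400.0);
""")

# Rows missing from either side, or matched but with differing amounts.
breaks = cur.execute("""
SELECT a.trade_id, a.amount AS amount_a, b.amount AS amount_b
FROM system_a a LEFT JOIN system_b b USING (trade_id)
WHERE b.trade_id IS NULL OR a.amount <> b.amount
UNION ALL
SELECT b.trade_id, NULL, b.amount
FROM system_b b LEFT JOIN system_a a USING (trade_id)
WHERE a.trade_id IS NULL
""").fetchall()

for trade_id, amt_a, amt_b in breaks:
    print(f"break on trade {trade_id}: system_a={amt_a}, system_b={amt_b}")
# Expected: trade 2 (amount mismatch), trade 3 (only in A), trade 4 (only in B).
```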

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


The Sr. Data Quality Engineer manages and coordinates, with internal or external parties, the collection, compilation, normalization, and standard analysis of data assets across diverse projects and data platforms; develops, maintains, evaluates, tests, and validates data solutions within an organization; and develops and executes plans, policies, and practices that control, protect, deliver, and enhance the value and integrity of the organization's data and information assets and programs.

Typical Functions
• Manipulates and queries data by building SQL queries and stored procedures single-handedly in Snowflake
• Writes and executes test cases for data-related release items within Agile processes
• Writes complex SQL queries and handles quality validation of large data sets
• Builds a data quality testing framework from scratch to monitor the quality of data
• Understands the basic concepts of data warehousing and ETL in order to validate changes to them
• Executes common data quality tests to validate data accuracy, completeness, freshness, integrity, and consistency
• Builds test cases to validate generated data analytical reports and dashboards
• Creates database tests to enforce data validation and constraint quality standards
• Applies knowledge of dimensional modeling and data warehouse concepts, such as star schemas, snowflake schemas, dimensions, and facts, to conduct data analysis
• Performs statistical tests on large datasets to determine data quality and integrity
• Evaluates system performance and design, as well as their effect on data quality
• Collaborates with database developers to improve data collection and storage processes
• Runs data queries to identify coding issues and data exceptions, and cleans data
• Gathers data from primary or secondary data sources to identify and interpret trends
• Keeps abreast of developments and trends in data quality analysis
• Collects, stores, processes, and analyses raw and/or complex data from multiple sources; recommends ways to apply the data; chooses and designs optimal data solutions; and builds data processing systems, using expertise in data warehousing solutions and the latest database technologies
• Maintains, implements, and monitors the quality of data and information within the architecture used across the company; reports on results and identifies and recommends application changes required to improve data quality in all applications
• Investigates data quality problems, conducts root-cause analysis, corrects errors, and develops prototypes, process improvement plans across all programs, and proofs of concept for the selected solutions
• Processes unstructured data into a form suitable for analysis, followed by the analysis itself
• Integrates innovations and algorithms into the organization's data systems, working closely with the engineering team
• Implements complex data projects focused on collecting, parsing, managing, analysing, and visualising large data sets to turn information into insights across multiple platforms
• Serves as a data subject matter expert; collaborates with business owners or external clients to establish an analysis plan that answers key business questions; and delivers both reporting results and insights
• Generates specific and operational reports in support of objectives and initiatives, and presents complex analytical data and results to appropriate audiences

Requirements
Other duties or functions may be assigned.
• Bachelor's Degree in Engineering, Computer Science, or equivalent experience
• 6+ years of software quality assurance experience, including data warehouse testing
• Proficiency in SQL, ideally Snowflake SQL, along with MySQL, SQL Server, and/or PostgreSQL
• Experience with Sisense or Power BI
• 5+ years of experience developing data-driven software
• Proficiency in programming languages, including Structured Query Language (SQL)
• Experience writing DBT tests
• Experience in Agile methods, particularly Scrum, preferred
• Demonstrated critical-thinking skills
• Professional experience across the data science lifecycle (feature engineering, training, model deployment)
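As a rough illustration of the "common data quality tests" this role describes — accuracy, completeness, freshness, integrity, consistency — here is a minimal, hypothetical sketch of three such checks against an invented table. A real framework (the posting mentions building one from scratch, and tools like DBT tests) would parameterize these checks and run them on a schedule; the 10% null threshold below is illustrative only.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customer_dim (
    customer_id INTEGER PRIMARY KEY,
    email       TEXT,
    loaded_at   TEXT
);
INSERT INTO customer_dim VALUES
 (1, 'a@example.com', datetime('now')),
 (2, NULL,            datetime('now')),
 (3, 'c@example.com', datetime('now', '-3 days'));
""")

failures = []

# Completeness: email should rarely be NULL (threshold is illustrative).
null_rate = cur.execute(
    "SELECT AVG(email IS NULL) FROM customer_dim").fetchone()[0]
if null_rate > 0.1:
    failures.append(f"completeness: {null_rate:.0%} of emails are NULL")

# Integrity/uniqueness: no duplicated business keys.
dupes = cur.execute("""
    SELECT COUNT(*) FROM (
        SELECT customer_id FROM customer_dim
        GROUP BY customer_id HAVING COUNT(*) > 1)
""").fetchone()[0]
if dupes:
    failures.append(f"integrity: {dupes} duplicated customer_id values")

# Freshness: the newest row should be less than a day old (SQLite stores UTC).
newest = cur.execute(
    "SELECT MAX(loaded_at) FROM customer_dim").fetchone()[0]
if datetime.fromisoformat(newest) < datetime.utcnow() - timedelta(days=1):
    failures.append(f"freshness: last load was {newest}")

print("FAILED:" if failures else "all checks passed", failures)
```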

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderābād

Remote

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: We are seeking a hands-on Senior Engineering Manager of Data Platform to spearhead the development of capabilities that power Vertex products while providing a connected experience for our customers. This role demands a deep engineering background with hands-on experience in building and scaling production-level systems. The ideal candidate will excel in leading teams to deliver high-quality data products and will provide mentorship, guidance, and leadership. In this role, you will work to increase domain data coverage and adoption of the Data Platform by promoting a connected user experience through data. You will increase data literacy and trust by leading our Data Governance and Master Data Management initiatives. You will contribute to the vision and roadmap of self-serve capabilities through the Data Platform.

Roles & Responsibilities:
• Be hands-on in leading the development of features that enhance the self-service capabilities of our data platform, ensuring the platform is scalable, reliable, and fully aligned with business objectives, and defining and implementing best practices in data architecture, data modeling, and data governance.
• Work closely with Product, Engineering, and other departments to ensure the data platform meets business requirements.
• Influence cross-functional initiatives related to data tools, governance, and cross-domain data sharing. Ensure technical designs are thoroughly evaluated and aligned with business objectives.
• Determine appropriate staffing to achieve goals and objectives. Interview, recruit, develop, and retain top talent.
• Manage and mentor a team of engineers, fostering a collaborative and high-performance culture and encouraging a growth mindset and accountability for outcomes. Interpret how the business strategy links to individual roles and responsibilities.
• Provide career development opportunities and establish processes and practices for knowledge sharing and communication.
• Partner with external vendors to address issues and technical challenges.
• Stay current with emerging technologies and industry trends to ensure the platform remains cutting-edge.

Professional & Technical Skills:
• 12+ years of hands-on experience in software development (preferably in the data space), with 3+ years of people management experience, demonstrating success in building, growing, and managing multiple teams.
• Extensive experience in architecting and building complex data platforms and products. In-depth knowledge of cloud-based services and data tools such as Snowflake, AWS, and Azure, with expertise in data ingestion, normalization, and modeling.
• Strong experience in building and scaling production-level cloud-based data systems utilizing data ingestion tools like Fivetran, data quality and observability tools like Monte Carlo, data catalogs like Atlan, and master data tools like Reltio or Informatica.
• Thorough understanding of best practices for agile software development and software testing.
• Experience deploying cloud-based applications using automated CI/CD processes and container technologies.
• Understanding of security best practices when architecting SaaS applications on cloud infrastructure.
• Ability to understand complex business systems and a willingness to learn and apply new technologies as needed.
• Proven ability to influence and deliver high-impact initiatives. Forward-thinking mindset with the ability to define and drive the team's mission, vision, and long-term strategies.
• Excellent leadership skills with a track record of managing teams and collaborating effectively across departments. Strong written and verbal communication skills.
• Proven ability to work with and lead remote teams to achieve sustainable long-term success.
• A "work together" and "get stuff done" attitude without losing sight of quality, and a sense of responsibility to customers and the team.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Engineering.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

4.0 years

6 - 9 Lacs

Hyderābād

On-site

About Citco: Citco is a global leader in fund services, corporate governance, and related asset services, with staff across 80 offices worldwide. With more than $1.7 trillion in assets under administration, we deliver end-to-end solutions and exceptional service to meet our clients' needs. For more information about Citco, please visit www.citco.com.

About the Team & Business Line: Citco Fund Services is a division of the Citco Group of Companies and is the largest independent administrator of hedge funds in the world. Our continuous investment in learning and technology solutions means our people are equipped to deliver a seamless client experience. This position reports into the Loan Services business line. As a core member of our Loan Services Data and Reporting team, you will work with some of the industry's most accomplished professionals to deliver award-winning services for complex fund structures that our clients can depend upon.

Your Role:
• Develop and execute database queries and conduct data analyses
• Create scripts to analyze and modify data, import/export scripts, and execute stored procedures
• Model data by writing SQL queries and Python code to support data integration and dashboard requirements
• Develop data pipelines that provide fast, optimized, and robust end-to-end solutions
• Leverage and contribute to designing and building relational database schemas for analytics
• Handle and manipulate data in various structures and repositories (data cube, data mart, data warehouse, data lake)
• Analyze, implement, and contribute to building APIs to improve the data integration pipeline
• Perform data preparation tasks including data cleaning, normalization, deduplication, and type conversion
• Perform data integration by extracting, transforming, and loading (ETL) data from various sources
• Identify opportunities to improve processes and strategies with technology solutions, and identify development needs in order to improve and streamline operations
• Create tabular reports, matrix reports, parameterized reports, and visual reports/dashboards in a reporting application such as Power BI Desktop/Cloud or Qlik; integrating Power BI/Qlik reports into other applications using embedded analytics such as Power BI service (SaaS), or via API automation, is an advantage
• Implement NLP techniques for text representation, semantic extraction, data structures, and modelling
• Contribute to deployment and maintenance of machine learning solutions in production environments
• Build and design cloud applications using Microsoft Azure/AWS cloud technologies

About You:
• Bachelor's degree in technology or a related field, or equivalent work experience
• 4+ years of SQL and/or Python experience is a must
• Strong knowledge of data concepts and tools, and experience with RDBMSs such as MS SQL Server, Oracle, etc.
• Well-versed in the concepts and techniques of business intelligence and data warehousing
• Strong database design and SQL skills, including database object development, performance tuning, and data analysis
• In-depth understanding of database management systems, OLAP, and ETL frameworks
• Familiarity or hands-on experience working with REST or SOAP APIs
• Well versed in API management and integration with various data sources in cloud platforms, to help with connecting to traditional SQL and new-age data sources such as Snowflake
• Familiarity with machine learning concepts such as feature selection, deep learning, and AI, and with ML/DL frameworks (like TensorFlow or PyTorch) and libraries (like scikit-learn, StatsModels) is an advantage
• Familiarity with BI technologies (e.g. Microsoft Power BI, Oracle BI) is an advantage
• Hands-on experience with at least one ETL tool (SSIS, Informatica, Talend, Glue, Azure Data Factory) and associated data integration principles is an advantage
• Minimum 1+ year of experience with cloud platform technologies (AWS/Azure), including Azure Machine Learning, is desirable

The following AWS experience is a plus:
• Implementing identity and access management (IAM) policies and managing user accounts with IAM
• Writing infrastructure as code (IaC) using CloudFormation or Terraform
• Implementing cloud storage using Amazon Simple Storage Service (S3)
• Serverless approaches using AWS Lambda, e.g. AWS SAM
• Configuring Amazon Elastic Compute Cloud (EC2) instances

Previous Work Experience:
• Experience querying databases and strong programming skills: Python, SQL, PySpark, etc.
• Prior experience supporting ETL production environments and web technologies such as XML is an advantage
• Previous working experience with Azure Data Services, including ADF, ADLS, Blob, Databricks, Hive, Python, and Spark, and/or features of Azure ML Studio, ML Services, and MLOps, is an advantage
• Experience with dashboard and reporting applications like Qlik, Tableau, and Power BI

Other:
• Well-rounded individual possessing a high degree of initiative
• Proactive person willing to accept responsibility with very little hand-holding
• A strong analytical and logical mindset
• Demonstrated proficiency in interpersonal and communication skills, including oral and written English
• Ability to work in fast-paced, complex business and IT environments
• Knowledge of loan servicing and/or loan administration is an advantage
• Understanding of Agile/Scrum methodology as it relates to the software development lifecycle

What We Offer: A rewarding and challenging environment that spans multiple geographies and multiple business lines; a great working environment, competitive salary and benefits, and opportunities for educational support; being part of an industry-leading global organisation, renowned for excellence; and opportunities for personal and professional career development.

Our Benefits: Your well-being is of paramount importance to us, and central to our success. We provide a range of benefits, training and education support, and flexible working arrangements to help you achieve success in your career while balancing personal needs. Ask us about specific benefits in your location. We embrace diversity, prioritizing the hiring of people from diverse backgrounds. Our inclusive culture is a source of pride and strength, fostering innovation and mutual respect. Citco welcomes and encourages applications from people with disabilities. Accommodations are available upon request for candidates taking part in all aspects of the selection process.
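To make the data preparation tasks listed above concrete — cleaning, normalization, deduplication, and type conversion — here is a minimal, hypothetical sketch in plain Python. The record layout is invented; a production pipeline would perform the same steps with an ETL tool or a library such as pandas.

```python
from datetime import datetime

# Hypothetical raw feed: inconsistent casing, stray whitespace, string
# amounts and dates, and an exact duplicate.
raw_rows = [
    {"id": "001", "name": "  Alice SMITH ", "amount": "1,200.50", "date": "2025-06-05"},
    {"id": "002", "name": "bob jones",      "amount": "300",      "date": "05/06/2025"},
    {"id": "001", "name": "  Alice SMITH ", "amount": "1,200.50", "date": "2025-06-05"},
]

def parse_date(s: str) -> datetime:
    # Normalize the two date formats seen in the feed to one type.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {s}")

clean, seen = [], set()
for row in raw_rows:
    rec = {
        "id": int(row["id"]),                             # type conversion
        "name": " ".join(row["name"].split()).title(),    # cleaning + case normalization
        "amount": float(row["amount"].replace(",", "")),  # strip thousands separators
        "date": parse_date(row["date"]).date().isoformat(),
    }
    key = (rec["id"], rec["date"])
    if key not in seen:                                   # deduplication on a business key
        seen.add(key)
        clean.append(rec)

print(clean)  # two records remain; the duplicate of id 1 is dropped
```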

Posted 1 week ago

Apply

1.0 years

0 Lacs

Delhi

On-site

About Consilium Software: Founded in 2007, Consilium Software is incorporated in Singapore, with software development and engineering labs in India, and subsidiaries and branch offices in India (New Delhi), Malaysia (Kuala Lumpur), Taiwan (Taipei City), Indonesia (Jakarta), Thailand (Bangkok), Australia (Melbourne), and Canada (Toronto).

Job brief: We are looking for a Database Developer to design stable and reliable databases according to our company's needs. You will be responsible for developing, testing, improving, and maintaining new and existing databases to help users retrieve data effectively. You will work closely with developers to ensure system consistency. You will also collaborate with administrators and clients to provide technical support and identify new requirements. Communication and organization skills are key for this position, along with a problem-solving attitude. Ultimately, you should be able to ensure our database systems run effectively and securely on a daily basis.

Requirements
· Proven work experience as a database developer
· In-depth understanding of data management (e.g. permissions, recovery, security, and monitoring)
· Hands-on experience with SQL
· Excellent analytical and organization skills
· An ability to understand front-end users' requirements and a problem-solving attitude
· Troubleshooting of critical bugs/issues
· Experience in relational database design practices, normalization techniques, and ORM modeling
· Experience writing complex queries and stored procedures in MSSQL or MySQL; exposure to working on large databases is preferred
· Skills and hands-on experience in generating reports from multiple data stores, master data management, ETL (extract, transform, and load), and data cleansing
· Knowledge of SSIS, SSAS, and SSRS
· Familiarity with data replication, backup, restore, and archival practices

Other behavioral/attitude aspects: Dynamic, self-motivated, and self-driven team player with a sense of ownership and good oral and written communication skills. Willing to travel periodically, both domestic and international.

Location: New Delhi
Qualifications: B.Tech, B.E., or MCA
Compensation: As per industry standards
Job Types: Full-time, Permanent
Benefits: Cell phone reimbursement, flexible schedule, health insurance, internet reimbursement, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus
Experience: Database development: 1 year (Required); SQL: 1 year (Preferred)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description: The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using machine learning, big data, statistics, econometrics, and optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics. Potential candidates should have hands-on experience in applying first-principles methods, machine learning, data mining, and text mining techniques to build analytics prototypes that work on massive datasets. Candidates should have: proficiency in Python and expertise in entity extraction, document classification, Natural Language Processing (NLP), and Natural Language Generation (NLG); experience developing commonly used predictive models such as linear regression, decision trees, SVM, KNN, gradient boosting, and random forests; proficiency in cloud platforms like GCP, including experience deploying AI applications; solid programming skills in Python and experience with relevant libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn); a proven track record of delivering successful Gen-AI projects and driving business impact; a strong understanding of text pre-processing and normalization techniques such as tokenization; excellent communication, presentation, and documentation skills; and strong problem-solving abilities with a proactive attitude towards learning and adopting new technologies.

Responsibilities
• Build data-driven models to understand the characteristics of engineering systems
• Apply machine learning, data mining, and text mining techniques to create scalable solutions for business problems
• Train, tune, validate, and monitor predictive models
• Analyze and extract relevant information from large amounts of historical business data, especially related to quality, product development, and connected vehicles, in both structured and unstructured formats
• Establish scalable, efficient, automated processes for large-scale data analyses
• Package and present findings and communicate with large cross-functional teams

Qualifications
• BE, B.Tech, M.S., or Ph.D. in Engineering, Computer Science, Operations Research, Statistics, Applied Mathematics, or a related field
• 5 years of experience in at least one of the following languages: Python, R
• 5 years of hands-on experience using machine learning/text mining tools and techniques such as clustering, classification, decision trees, random forests, support vector machines, deep learning, neural networks, reinforcement learning, and other numerical algorithms
• 1+ years of experience developing LLM solutions for the enterprise
• Experience with Google Cloud Platform (GCP), including Vertex AI, BigQuery, DBT, NoSQL databases, and the Hadoop ecosystem
• Excellent problem-solving, communication, and data presentation skills
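Note that this posting uses "normalization" in its NLP sense — text pre-processing such as tokenization. As a minimal, hypothetical sketch of what that involves (real pipelines would use a library like spaCy or a model-specific subword tokenizer), here is a plain-Python version:

```python
import re
import unicodedata

def normalize_text(text: str) -> list[str]:
    # Unicode normalization: decompose accented characters, then drop
    # the combining marks (so "naïve" becomes "naive").
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Map curly apostrophes to ASCII and fold case.
    text = text.replace("\u2019", "'").casefold()
    # Simple word tokenization; production systems typically use
    # subword tokenizers instead of a regex.
    return re.findall(r"[a-z0-9]+(?:'[a-z]+)?", text)

print(normalize_text("Ford’s Connected-Vehicle data — naïve test"))
# ["ford's", 'connected', 'vehicle', 'data', 'naive', 'test']
```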

Posted 1 week ago

Apply

Exploring Normalization Jobs in India

The job market for normalization roles in India is growing rapidly as more companies recognize the importance of data quality and consistency. Normalization jobs involve organizing and structuring data to eliminate redundancy and improve efficiency in database management. If you are considering a career in normalization, this article will provide you with valuable insights into the job market in India.
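A tiny worked example of what that means in practice: the hypothetical table below stores the customer's name on every order, so a name change must be applied to many rows (an update anomaly). Normalizing splits it into two tables with the name stored exactly once. All table and column names are invented for the illustration, and sqlite3 is used so the sketch runs anywhere.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalized: customer_name is repeated on every order, so redundancy
# creeps in and the two rows for customer 1 can drift out of sync.
cur.executescript("""
CREATE TABLE orders_flat (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER,
    customer_name TEXT,
    amount        REAL
);
INSERT INTO orders_flat VALUES
 (101, 1, 'Asha Patel', 500.0),
 (102, 1, 'Asha Patel', 120.0),
 (103, 2, 'Ravi Kumar', 900.0);
""")

# Normalized: each fact is stored exactly once.
cur.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, customer_name TEXT NOT NULL);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    amount      REAL
);
INSERT INTO customers SELECT DISTINCT customer_id, customer_name FROM orders_flat;
INSERT INTO orders SELECT order_id, customer_id, amount FROM orders_flat;
""")

# A name change is now a single-row update instead of many.
cur.execute("UPDATE customers SET customer_name = 'Asha P. Shah' WHERE customer_id = 1")
print(cur.execute("""
    SELECT o.order_id, c.customer_name, o.amount
    FROM orders o JOIN customers c USING (customer_id)
""").fetchall())
```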

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Delhi

These cities are known for their thriving IT sectors and have a high demand for normalization professionals.

Average Salary Range

The average salary range for normalization professionals in India varies based on experience levels. Entry-level roles can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.

Career Path

A typical career path in normalization may involve starting as a Data Analyst, progressing to a Database Administrator, and eventually becoming a Data Architect or Database Manager. With experience and additional certifications, professionals can move into roles such as Data Scientist or Business Intelligence Analyst.

Related Skills

In addition to normalization skills, professionals in this field are often expected to have knowledge of database management systems, SQL, data modeling, data warehousing, and data analysis.

Interview Questions

  • What is normalization and why is it important? (basic)
  • Explain the difference between 1NF, 2NF, and 3NF. (medium) (see the sketch after this list)
  • How do you identify and resolve data anomalies in a database? (medium)
  • What is denormalization and when would you use it? (advanced)
  • Can you explain the benefits of using normalization in database design? (basic)
  • Describe the process of database normalization. (medium)
  • How do you handle redundant data in a database? (medium)
  • What are the limitations of normalization? (advanced)
  • How do you ensure data integrity in a normalized database? (medium)
  • What is the role of foreign keys in normalization? (medium)
  • Explain the concept of functional dependency in normalization. (medium)
  • How do you optimize database performance while maintaining normalization? (advanced)
  • Can you give an example of a database that is not normalized and explain how you would normalize it? (medium)
  • What is the difference between horizontal and vertical partitioning in database normalization? (advanced)
  • How do you handle updates and inserts in a normalized database? (medium)
  • Explain the concept of transitive dependency in normalization. (advanced)
  • What are the steps involved in normalization? (basic)
  • How do you determine the appropriate normalization level for a database? (medium)
  • How do you handle null values in a normalized database? (medium)
  • What are the common pitfalls to avoid in normalization? (advanced)
  • How do you ensure data consistency across normalized tables? (medium)
  • Can you explain the concept of referential integrity in normalization? (medium)
  • How do you normalize a database with composite keys? (advanced)
  • Describe the benefits of using normalization in a data warehouse environment. (medium)
  • How do you handle data migration in a normalized database? (medium)
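For the normal-form questions above (referenced at the 1NF/2NF/3NF item), here is a compact, hypothetical illustration. The order_items table below has a composite key (order_id, product_id), but product_name depends only on product_id — a partial dependency that violates 2NF; moving products into their own table fixes it. The schema is invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Violates 2NF: the key is (order_id, product_id), but product_name is
# functionally dependent on product_id alone (a partial dependency).
cur.executescript("""
CREATE TABLE order_items_unnormalized (
    order_id     INTEGER,
    product_id   INTEGER,
    product_name TEXT,      -- depends only on product_id
    quantity     INTEGER,
    PRIMARY KEY (order_id, product_id)
);
""")

# 2NF fix: every non-key column now depends on the whole key of its table.
cur.executescript("""
CREATE TABLE products (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL
);
CREATE TABLE order_items (
    order_id   INTEGER,
    product_id INTEGER REFERENCES products(product_id),
    quantity   INTEGER,
    PRIMARY KEY (order_id, product_id)
);
""")

# 3NF is the analogous rule for transitive dependencies: if a customers
# table held (customer_id, city, city_tax_rate) and city -> city_tax_rate,
# the rate would move to a separate cities table keyed by city.
print("schema created")
```

Denormalization, also asked about above, is the deliberate reversal of such splits — for example, copying product_name back onto order_items to avoid a join in a read-heavy reporting workload, at the cost of reintroducing redundancy.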

Closing Remark

As you prepare for interviews and explore job opportunities in the field of normalization, remember to showcase your expertise in database management and data structuring. With the right skills and knowledge, you can excel in this dynamic and growing field in India. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
