
Monday, December 22, 2014

HP 430 Notebook PC Drivers | Windows 7 32-bit




HP 430 driver link (everything already bundled into one folder), no hassle.
Below is the list of what is inside the folder link.

Download Link : [[ Download Driver HP 430 Windows 7 32 Bit ]]

====================================================================
Download individual parts only (go straight to the driver you want)
-------------------------------------------------------------------------------------------------------------------
====================================================================
NB: Hope this is useful :)

Sunday, July 8, 2012

Bandwidth Management | Linux

To stop clients from fighting over internet bandwidth, the bandwidth has to be divided up. Imagine what happens without any bandwidth sharing: if even a single client downloads using the DownThemAll add-on for Mozilla Firefox, which can split a download into at most 10 parts (so one downloading client looks like 10 clients requesting downloads at the same time), that one client can swallow the entire bandwidth.

It gets worse with other download accelerators (GetRight, IDM, DAP), which can split a download into as many as 100 parts. For bandwidth management (bandwidth sharing) on Linux, an application that is quite easy to use is HTB Tools.

Since I am using Ubuntu 9.04 Jaunty Jackalope, I will share a little about installing HTB Tools on Ubuntu 9.04 Jaunty Jackalope. First, download the latest HTB Tools package from

http://htb-tools.skydevel.ro/download.php

I used HTB-tools-0.3.0a-i486-1.tgz. Extract the file with:

$sudo tar -zxvf HTB-tools-0.3.0a-i486-1.tgz

This produces the folders etc, install, sbin, and usr in /home/user/. Move everything from the HTB Tools sbin folder into the server's /sbin with:

$sudo mv /home/user/sbin/htb /sbin

$sudo mv /home/user/sbin/htbgen /sbin

$sudo mv /home/user/sbin/q_checkcfg /sbin

$sudo mv /home/user/sbin/q_parser /sbin

$sudo mv /home/user/sbin/q_show /sbin

Move the htb folder from /home/user/etc into the server's /etc with:

$sudo mv /home/user/etc/htb /etc

Rename the files in /etc/htb to drop the .new suffix:

$sudo mv /etc/htb/eth0-qos.cfg.new /etc/htb/eth0-qos.cfg

$sudo mv /etc/htb/eth1-qos.cfg.new /etc/htb/eth1-qos.cfg

Move the file /home/user/etc/rc.d/rc.htb.new to /etc/init.d/ and rename it to rc.htb:

$sudo mv /home/user/etc/rc.d/rc.htb.new /etc/init.d/rc.htb

Change the permissions on rc.htb so it can be executed:

$sudo chmod 755 /etc/init.d/rc.htb

Now configure eth0-qos.cfg and eth1-qos.cfg to suit your needs. If eth1 is the interface directly facing the clients, then eth1-qos.cfg is the one to configure: $sudo nano /etc/htb/eth1-qos.cfg

-- example configuration -- # Bandwidth settings for my internet café

class LAN_1 {
    bandwidth 384;
    limit 384;
    burst 2;
    priority 1;
    que sfq;

    client pc1 {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.2/32;
        };
    };

    client pc2 {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.3/32;
        };
    };

    client pc3 {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.4/32;
        };
    };

    client pc4 {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.5/32;
        };
    };

    client pc5 {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.6/32;
        };
    };

    client admin {
        bandwidth 64;
        limit 128;
        burst 2;
        priority 1;
        src {
            192.168.1.1/32;
        };
    };
};

If the configuration is correct, HTB Tools is ready to run. Start it with:

$sudo /etc/init.d/rc.htb start_eth1

If you want HTB Tools to start automatically at boot (as soon as the computer is switched on), add that command line to the file /etc/rc.local. To watch the traffic shaping in action:

$sudo /etc/init.d/rc.htb show_eth1

To stop HTB Tools:

$sudo /etc/init.d/rc.htb stop_eth1

Good luck trying it out.
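For the boot-time setup, a minimal /etc/rc.local along these lines should work (a sketch; on Ubuntu the file must stay executable and keep exit 0 as its last line):

#!/bin/sh -e
# /etc/rc.local - executed at the end of each multiuser runlevel
# start HTB Tools shaping on the client-facing interface
/etc/init.d/rc.htb start_eth1
exit 0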

Copied straight (pure copy-paste) from:

http://www.hi-techmall.org/workshop/blog/pengaturan-bandwidth-internet-htb-tools-ubuntu

Install the SmartFren ACM modem on Linux

1. Make sure wvdial is already installed on Linux Mint.

2. Connect the SmartFren Connex to the computer through a USB port.

3. Open a Terminal and type the command line:

$ sudo eject /dev/sr0 (for a computer/laptop without a CD/DVD-ROM drive)

$ sudo eject /dev/sr1 (for a computer/laptop with a CD/DVD-ROM drive)

enter the root password

$ sudo rmmod usb_storage

$ sudo modprobe usbserial vendor=0x19d2 product=0xffdd

$ sudo wvdialconf

(This is used to check whether the modem has been detected.

If it has, a line will appear saying: found modem /dev/ttyUSB0.

Note that it is not always ttyUSB0; it can also be ttyUSB1, ttyUSB2, ...)
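If you are not sure which ttyUSB device the modem was given, one quick generic check (not in the original post) is to look at the kernel log right after the modprobe step:

$ dmesg | grep -i ttyusb

The most recent lines show which ttyUSB number was assigned.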

Once the modem is detected, type the command line:

$ sudo gedit /etc/wvdial.conf

The wvdial configuration will open in gedit. Delete the existing script and replace it with the script for the provider you use. For example, if we are using Smart, copy-paste the following script:

[Dialer smart]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
Phone = #777
New PPPD = yes
ISDN = 0
Username = smart
Password = smart
Modem = /dev/ttyUSB0
Baud = 460800
Command Line = ATDT
Stupid Mode = 1

Save and close gedit.

Note: the Modem = /dev/ttyUSB0 line (highlighted in blue in the original post) should be adjusted to the ttyUSB device that was detected when running $ sudo wvdialconf

4. To connect, type the command line:

$ sudo wvdial smart

5. The connection is up once the connection details appear in the Terminal.

6. Enjoy browsing the internet.

Remember:

Keep the Terminal open, because closing the Terminal disconnects the internet connection you have established.
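To avoid retyping these steps after every boot, the mode-switch and dial commands can be wrapped into one small script (a sketch built from the steps above; the vendor/product IDs and the /dev/sr0 device are the ones this post assumes for the SmartFren Connex):

#!/bin/sh
# smartfren-connect.sh - switch the SmartFren modem out of storage mode, then dial
sudo eject /dev/sr0                                   # use /dev/sr1 on machines with a CD/DVD drive
sudo rmmod usb_storage                                # detach the USB storage driver
sudo modprobe usbserial vendor=0x19d2 product=0xffdd  # rebind the modem as a serial device
sudo wvdial smart                                     # dial using the [Dialer smart] section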

Install a Canon iP2770 printer | Ubuntu

It started when I switched from being a Windows user to a Linux Ubuntu user and was confused about how to get my Canon iP2770 printer to talk to Ubuntu. After searching Google, I found the Linux version of the Canon iP2770 printer driver from DewaBayu27. Thankfully, I was able to apply it. :) On this occasion I will share how to install the Canon iP2770 printer driver on Ubuntu. I am currently using a Dell Inspiron 14R Core i3 notebook with Ubuntu 10.10; before that I also used Ubuntu 10.04, and this driver is still compatible with both versions.

Here are the steps for installing the driver:
  1. First download the iP2770 printer driver from: http://g4l4u.co.cc/fileupload/cnijfilter-ip2700series-3.30-1-i386-deb.tar.gz, Download via 4shared.com, Download via mediafire.com
  2. Next open a console, which in Ubuntu is called a terminal. Then, logged in as root, extract the file by running: #tar -zxvf cnijfilter-ip2700series-3.30-1-i386-deb.tar.gz
  3. Once that is done, still in the console/terminal, change into the directory extracted from the file, and install it by running: #./install.sh
  4. At this stage, plug in your Canon iP2770's USB cable with the printer switched on, because the driver will search for the printer hardware so it can communicate. Just follow the installer's instructions through to the end. Easy, right, friends? :) God willing, you can now use it to print documents. :)
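Once the installer finishes, a quick way to confirm that Ubuntu registered the printer queue (a generic CUPS check, not part of the original guide) is:

$ lpstat -p

This should list the iP2770 queue and report whether it is idle or printing.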
An even easier alternative:
  1. After downloading the Canon iP2770 driver file, extract it by right-clicking the file and choosing extract.
  2. Next, open the extracted folder and look for the file named install.sh.
  3. Right-click that install.sh file, go into Properties, choose Permissions, tick "Allow executing file as program", then close.
  4. Have the printer switched on and plug its USB cable into your PC or notebook.
  5. Next, double-click the install.sh file and choose "Run in Terminal".
  6. Follow the instructions of the driver installation process. Then you can use it for printing your documents. Easy, right? :) God willing, this way is even easier than the method above. :)
I deliberately explained it in two ways so that friends who need it get a clearer picture and find it easier to put into practice. :) Good luck, friends, and SUCCESS...!!!
Regards,


Copied straight (pure copy-paste) from:
http://tirtamataram.blogspot.com/2011/04/cara-mudah-install-driver-printer-canon.html

Sunday, May 29, 2011

Smartphones for Stroke Diagnosis

Are you experiencing stroke symptoms, or have you already had a stroke? If so, there is good news: there is a mobile phone/smartphone application that can diagnose stroke. It can be used on the iPhone or on Android, so a doctor no longer needs to go to the hospital to diagnose a stroke patient and write a prescription for treatment, according to a study conducted by researchers at the University of Calgary. "Now they can bring an expert to this stroke problem," said Ross Mitchell, a medical professor at the Canadian university who worked on the study.

Medical experts have been using a 3.5-inch screen, like the one on the iPhone, to diagnose stroke in emergencies. This is made possible by advances in compressing images from CT (computerized tomography) scanners, in microprocessors, and in wireless data bandwidth. The application is called ResolutionMD Mobile and can be downloaded from the App Store for iPhone and iPad, or from the Android Market for phones running Google's software. It is already in use by several neuro-radiologists at hospitals in Europe, said Byron Osing, CEO of Calgary Scientific. Quoted from CNN.

Quoted from http://didno76.blogspot.com with the aim of sharing information with visitors of kamis-wage.blogspot.com

Wednesday, April 27, 2011

BIOS (Basic Input and Output System)

In IBM PC Compatible computers, the basic input/output system (BIOS), also known as the System BIOS, is a de facto standard defining a firmware interface.[1]


Phoenix AwardBIOS CMOS (non-volatile memory) Setup utility on a standard PC

The BIOS software is built into the PC, and is the first code run by a PC when powered on ('boot firmware'). The primary function of the BIOS is to load and start an operating system. When the PC starts up, the first job for the BIOS is to initialize and identify system devices such as the video display card, keyboard and mouse, hard disk, CD/DVD drive and other hardware. The BIOS then locates software held on a peripheral device (designated as a 'boot device'), such as a hard disk or a CD, and loads and executes that software, giving it control of the PC.[2] This process is known as booting, or booting up, which is short for bootstrapping.

BIOS software is stored on a non-volatile ROM chip built into the system on the motherboard. The BIOS software is specifically designed to work with the particular type of system in question, including knowledge of the workings of the various devices that make up the system's complementary chipset. In modern computer systems, the BIOS chip's contents can be rewritten, allowing BIOS software to be upgraded.

A BIOS will also have a user interface (or UI for short). Typically this is a menu system accessed by pressing a certain key on the keyboard when the PC starts. In the BIOS UI, a user can:

  • configure hardware
  • set the system clock
  • enable or disable system components
  • select which devices are eligible to be a potential boot device
  • set various password prompts, such as a password for securing access to the BIOS UI functions itself and preventing malicious users from booting the system from unauthorized peripheral devices.

The BIOS provides a small library of basic input/output functions used to operate and control the peripherals such as the keyboard, text display functions and so forth, and these software library functions are callable by external software. In the IBM PC and AT, certain peripheral cards such as hard-drive controllers and video display adapters carried their own BIOS extension ROM, which provided additional functionality. Operating systems and executive software, designed to supersede this basic firmware functionality, will provide replacement software interfaces to applications.

The role of the BIOS has changed over time; today BIOS is a legacy system, superseded by the more complex Extensible Firmware Interface (EFI), but BIOS remains in widespread use, and EFI booting has only been supported in 64-bit x86 Windows since 2008. BIOS is primarily associated with the 16-bit, 32-bit, and the beginning of the 64-bit architecture eras, while EFI is used for some newer 32-bit and 64-bit architectures. Today BIOS is primarily used for booting a system, and for certain additional features such as power management (ACPI), video initialization (in X.org); but otherwise is not used during the ordinary running of a system, while in early systems (particularly in the 16-bit era), BIOS was used for hardware access – operating systems (notably MS-DOS) would call the BIOS rather than directly accessing the hardware. In the 32-bit era and later, operating systems instead generally directly accessed the hardware using their own device drivers. However, the distinction between BIOS and EFI is rarely made in terminology by the average computer user, making BIOS a catch-all term for both systems.


Terminology

The term first appeared in the CP/M operating system, describing the part of CP/M loaded during boot time that interfaced directly with the hardware (CP/M machines usually had only a simple boot loader in their ROM). Most versions of DOS have a file called "IBMBIO.COM" or "IO.SYS" that is analogous to the CP/M BIOS.

Among other classes of computers, the generic terms boot monitor, boot loader or boot ROM were commonly used. Some Sun and PowerPC-based computers use Open Firmware for this purpose. There are a few alternatives for Legacy BIOS in the x86 world: Extensible Firmware Interface, Open Firmware (used on the OLPC XO-1) and coreboot.

IBM PC-compatible BIOS chips

In principle, the BIOS in ROM was customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to the operating system. For example, an IBM PC might have had either a monochrome or a color display adapter, using different display memory addresses and hardware - but the BIOS service to print a character on the screen in text mode would be the same.

PhoenixBIOS D686. This BIOS chip is housed in a PLCC package, which is, in turn, plugged into a PLCC socket. (The chip's contents are laid out as a boot block, a DMI block, and a main block.)

Prior to the early 1990s, BIOSes were stored in ROM or PROM chips, which could not be altered by users. As its complexity and need for updates grew, and re-programmable parts became more available, BIOS firmware was most commonly stored on EEPROM or flash memory devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, Flash BIOS chips became common around 1995 because electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard erasable PROM (EPROM) chips. EPROM chips may be erased by prolonged exposure to ultraviolet light, which reaches the die through a quartz window in the package. Chip manufacturers use EPROM programmers (blasters) to program EPROM chips. Electrically erasable (EEPROM) chips come with the additional feature of allowing BIOS reprogramming via higher-than-normal voltages.[3] BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.[4]

Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, December 31, 1999.[5]

The first flash chips were attached to the ISA bus. Starting in 1997, the BIOS flash moved to the LPC bus, a functional replacement for ISA, following a new standard implementation known as "firmware hub" (FWH). In 2006, the first systems supporting a Serial Peripheral Interface (SPI) appeared, and the BIOS flash moved again.

The size of the BIOS, and the capacity of the ROM, EEPROM, and other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 16 megabytes. Some modern motherboards include even larger NAND flash ROM ICs on board, capable of storing a whole compact operating system distribution, such as some Linux distributions. For example, some recent ASUS motherboards include SplashTop Linux embedded in their NAND flash ROM ICs.

Flashing the BIOS

In modern PCs the BIOS is stored in rewritable memory, allowing the contents to be replaced or 'rewritten'. This rewriting of the contents is sometimes termed 'flashing'. This is done by a special program, usually provided by the system's manufacturer. A file containing such contents is sometimes termed 'a BIOS image'. A BIOS might be reflashed in order to upgrade to a newer version to fix bugs or provide improved performance or to support newer hardware, or a reflashing operation might be needed to fix a damaged BIOS.
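On Linux, one widely used open-source tool for this is flashrom (shown here as a sketch; board support varies, and a failed write can brick the machine, so check the tool's compatibility list first):

$ sudo flashrom -p internal -r backup.bin   # read and keep a backup of the current BIOS image
$ sudo flashrom -p internal -w new_bios.bin # write the new image to the flash chip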

BIOS chip vulnerabilities



An American Megatrends BIOS registering the “Intel CPU uCode Error” while doing POST, most likely a problem with the POST.

EEPROM chips are advantageous because they can be easily updated by the user; hardware manufacturers frequently issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage had the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block"; a portion of the BIOS which runs first and must be updated separately. This code verifies if the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB memory) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruptions.

Overclocking

Some BIOS chips allow overclocking, an action in which the CPU is adjusted to a higher clock rate than its factory preset. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan.

Virus attacks

There are at least three known BIOS attack viruses, two of which were for demonstration purposes.

CIH

The first was a virus able to erase Flash ROM BIOS content, rendering computer systems unstable. CIH, also known as the "Chernobyl virus", appeared for the first time in mid-1998 and became active in April 1999. It affected systems' BIOSes, and often the machines could not be fixed on their own since they were no longer able to boot at all. The repair required removing the Flash ROM IC from the motherboard to be reprogrammed elsewhere. Damage from CIH was possible because the virus specifically targeted the then-widespread Intel i430TX motherboard chipset, and the most common operating systems of the time were based on the Windows 9x family, which allowed direct hardware access to all programs.

Modern systems are not vulnerable to CIH because of a variety of chipsets being used which are incompatible with the Intel i430TX chipset, and also other Flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks which are protected from accidental overwrite or dual and quad BIOS equipped systems which may, in the event of a crash, use a backup BIOS. Also, all modern operating systems like Linux, Mac OS X, Windows NT-based Windows OS like Windows 2000, Windows XP and newer, do not allow user mode programs to have direct hardware access. As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering alerts from antivirus software. Other BIOS viruses remain possible, however[6]: since most Windows users run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware.

Black Hat 2006

The second one was a technique presented by John Heasman, principal security consultant for UK based Next-Generation Security Software at the Black Hat Security Conference (2006), where he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normal ACPI functions stored in flash memory.

Persistent BIOS Infection

The third one, known as "Persistent BIOS infection", was a method presented in CanSecWest Security Conference (Vancouver, 2009) and SyScan Security Conference (Singapore, 2009) where researchers Anibal Sacco [7] and Alfredo Ortega, from Core Security Technologies, demonstrated insertion of malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at every start-up, even before the operating system is booted.

The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine or for the user on the operating system to be root. Despite this, however, researchers underline the profound implications of their discovery: “We can patch a driver to drop a fully working rootkit. We even have a little code that can remove or disable antivirus.”[8]

Firmware on adapter cards

A computer system can contain several BIOS firmware chips. The motherboard BIOS typically contains code to access hardware components absolutely necessary for bootstrapping the system, such as the keyboard (either PS/2 or on a USB human interface device), and storage (floppy drives, if available, and IDE or SATA hard disk controllers). In addition, plug-in adapter cards such as SCSI, RAID, Network interface cards, and video boards often include their own BIOS (e.g. Video BIOS), complementing or replacing the system BIOS code for the given component. (This code is generally referred to as an option ROM.) Even devices built into the motherboard can behave in this way; their option ROMs can be stored as separate code on the main BIOS flash chip, and upgraded either in tandem with, or separately to, the main BIOS.

An add-in card usually only requires an option ROM if it:

  • Needs to be used before the operating system can be loaded (usually this means it is required in the bootstrapping process), and
  • Is too sophisticated or specific a device to be handled by the main BIOS

Older PC operating systems, such as MS-DOS (including all DOS-based versions of Microsoft Windows), and early-stage bootloaders, may continue to use the BIOS for input and output. However, the restrictions of the BIOS environment means that modern OSes will almost always use their own device drivers to directly control the hardware. Generally, these device drivers only use BIOS and option ROM calls for very specific (non-performance-critical) tasks, such as preliminary device initialization.

In order to discover memory-mapped option ROMs during the boot process, PC BIOS implementations scan real memory from 0xC0000 to 0xF0000 on 2 KiB boundaries, looking for a ROM signature: 0xAA55 (0x55 followed by 0xAA, since the x86 architecture is little-endian). In a valid expansion ROM, this signature is immediately followed by a single byte indicating the number of 512-byte blocks it occupies in real memory. The next byte contains an offset describing the option ROM's entry point, to which the BIOS immediately transfers control. At this point, the expansion ROM code takes over, using BIOS services to register interrupt vectors for use by post-boot applications, provide a user configuration interface, or display diagnostic information.

There are many methods and utilities for examining the contents of various motherboard BIOS and expansion ROMs, such as Microsoft DEBUG or the UNIX dd.
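To eyeball this region yourself on Linux, the legacy option ROM area (0xC0000-0xEFFFF) can be dumped from /dev/mem with dd and searched for the 55 AA signature (a sketch; reading /dev/mem needs root and may be restricted by kernel configuration such as CONFIG_STRICT_DEVMEM):

$ sudo dd if=/dev/mem of=optroms.bin bs=1K skip=768 count=192  # 0xC0000 = 768 KiB, 192 KiB span
$ xxd optroms.bin | grep -i "55aa" | head                      # candidate ROM headers begin with 55 AA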

BIOS boot specification

If the expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter for which the BIOS has no driver code), it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.[citation needed]

Changing role of the BIOS

Some operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC.[9] A variety of technical reasons makes it inefficient for some recent operating systems written for 32-bit CPUs such as Linux and Microsoft Windows to invoke the BIOS directly. Larger, more powerful, servers and workstations using PowerPC or SPARC CPUs by several manufacturers developed a platform-independent Open Firmware (IEEE-1275), based on the Forth programming language. It is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC CHRP motherboards. Later x86-based personal computer operating systems, like Windows NT, use their own, native drivers which also makes it much easier to extend support to new hardware, while the BIOS still relies on a legacy 16-bit real mode runtime interface.

There was a similar transition for the Apple Macintosh, where the system software originally relied heavily on the ToolBox—a set of drivers and other useful routines stored in ROM based on Motorola's 680x0 CPUs. These Apple ROMs were replaced by Open Firmware in the PowerPC Macintosh, then EFI in Intel Macintosh computers.

Later BIOS took on more complex functions, by way of interfaces such as ACPI; these functions include power management, hot swapping, thermal management. However BIOS limitations (16-bit processor mode, only 1 MiB addressable space, PC AT hardware dependencies, etc.) were seen as clearly unacceptable for the newer computer platforms. Extensible Firmware Interface (EFI) is a specification which replaces the runtime interface of the legacy BIOS. Initially written for the Itanium architecture, EFI is now available for x86 and x86-64 platforms; the specification development is driven by The Unified EFI Forum, an industry Special Interest Group.

Linux has supported EFI via the elilo boot loader. The Open Source community increased their effort to develop a replacement for proprietary BIOSes and their future incarnations with an open sourced counterpart through the coreboot and OpenBIOS/Open Firmware projects. AMD provided product specifications for some chipsets, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards. MSI and Gigabyte Technology have followed suit with the MSI K9ND MS-9282 and MSI K9SD MS-9185 resp. the M57SLI-S4 models.

Some BIOSes contain a "SLIC", a digital signature placed inside the BIOS by the manufacturer, for example Dell. This SLIC is inserted in the ACPI table and contains no active code. Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows Installation disk and/or system recovery disc containing Windows software. Systems having a SLIC can be activated with an OEM Product Key, and they verify an XML formatted OEM certificate against the SLIC in the BIOS as a means of self-activating. If a user performs a fresh install of Windows, they will need to have possession of both the OEM key and the digital certificate for their SLIC in order to bypass activation; in practice this is extremely unlikely and hence the only real way this can be achieved is if the user performs a restore using a pre-customised image provided by the OEM.

Recent Intel processors (P6 and P7) have reprogrammable microcode. The BIOS may contain patches to the processor code to allow errors in the initial processor code to be fixed, updating the processor microcode each time the system is powered up. Otherwise, an expensive processor swap would be required.[10] For example, the Pentium FDIV bug became an expensive fiasco for Intel that required a product recall because the original Pentium did not have patchable microcode.

The BIOS business

The vast majority of PC motherboard suppliers license a BIOS "core" and toolkit from a commercial third-party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customizes this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer.

Major BIOS vendors include American Megatrends (AMI), Insyde Software, Phoenix Technologies and Byosoft. Former vendors include Award Software and Microid Research which were acquired by Phoenix Technologies in 1998. Phoenix has now phased out the Award Brand name. General Software, which was also acquired by Phoenix in 2007, sold BIOS for Intel processor based embedded systems.

Overclocking




"Overclocked" redirects here. For other uses, see Overclocked (disambiguation).
AMD Athlon XP overclocking BIOS setup on ABIT NF7-S. Front side bus frequency (external clock) has increased from 133 MHz to 148 MHz, and the clock multiplier factor has changed from 13.5 to 16.5

Overclocking is the process of running a computer component at a higher clock rate (more clock cycles per second) than it was designed for or was specified by the manufacturer, usually practiced by enthusiasts seeking an increase in the performance of their computers. Some purchase low-end computer components which they then overclock to higher clock rates, or overclock high-end components to attain levels of performance beyond the specified values. Others overclock outdated components to keep pace with new system requirements, rather than purchasing new hardware.[1]

People who overclock their components mainly focus their efforts on processors, video cards, motherboard chipsets, and random-access memory (RAM). It is done through manipulating the CPU multiplier and the motherboard's front side bus (FSB) clock rate until a maximum stable operating frequency is reached, although with the introduction of Intel's new X58 chipset and the Core i7 processor, the front side bus has been replaced with the QPI (Quick Path Interconnect); often this is called the Baseclock (BCLK). While the idea is simple, variation in the electrical and physical characteristics of computing systems complicates the process. CPU multipliers, bus dividers, voltages, thermal loads, cooling techniques and several other factors such as individual semiconductor clock and thermal tolerances can affect it.[2]


Considerations

There are several considerations when overclocking. First is to ensure that the component is supplied with adequate power to operate at the new clock rate. However, supplying the power with improper settings or applying excessive voltage can permanently damage a component. Since tight tolerances are required for overclocking, only more expensive motherboards—with advanced settings that computer enthusiasts are likely to use—have built-in overclocking capabilities. Motherboards with fewer features, such as those found in Original Equipment Manufacturer (OEM) systems, often do not support overclocking.

Cooling
High quality heatsinks are often made of copper

All electronic circuits produce heat generated by the movement of electrical current. As clock frequencies in digital circuits and the applied voltage increase, so does the heat generated by components running at the higher performance levels. The relationship between clock frequency and thermal design power (TDP) is roughly linear. However, there is a limit to the maximum frequency, called a "wall". To get past it, overclockers raise the chip voltage to increase the overclocking potential. The effect of voltage on TDP is much stronger than linear: dynamic power scales roughly with the square of the voltage, and the extra heat compounds the problem. This increased heat requires effective cooling to avoid damaging the hardware. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics.

Because most stock cooling systems are designed for the amount of power produced during non-overclocked use, overclockers typically turn to more effective cooling solutions, such as powerful fans, larger heatsinks, heat pipes and water cooling. Size, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity, but is expensive.[3] Aluminium is more widely used; it has poorer thermal conductivity, but is significantly cheaper than copper. Heat pipes are commonly used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.[3]
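As a rough rule of thumb (the standard CMOS dynamic-power approximation, added here for reference rather than taken from the original article): P ≈ C × V² × f, where C is the switched capacitance, V the supply voltage, and f the clock frequency. A 10% voltage bump alone therefore costs roughly 21% more power, before any frequency increase.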
Interior of a water-cooled computer, showing CPU water block, tubing, and pump.

Water cooling carries waste heat to a radiator. Thermoelectric cooling devices, also known as Peltier devices, are recently popular with the onset of high Thermal Design Power (TDP) processors made by Intel and AMD. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat. For this reason, it is often necessary to supplement thermoelectric cooling devices with a convection-based heatsink or a water-cooling system.
Liquid nitrogen may be used for cooling an overclocked system, when an extreme measure of cooling is needed.

Other cooling methods are forced convection and phase change cooling which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases,[4] such as record-setting attempts or one-off experiments rather than cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate above 500 GHz, which was done by cooling the chip to 4.5 K (−268.7 °C; −451.6 °F) using liquid helium.[5] These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can be formed on chilled components.[4] Moreover, silicon-based junction gate field-effect transistors (JFET) will degrade below temperatures of roughly 100 K (−173 °C; −280 °F) and eventually cease to function or "freeze out" at 40 K (−233 °C; −388 °F) since the silicon ceases to be semiconducting[6] so using extremely cold coolants may cause devices to fail.

Submersion cooling, used by the Cray-2 supercomputer, involves sinking a part of computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components.[7] A good submersion liquid is Fluorinert made by 3M, which is expensive and can only be purchased with a permit.[7] Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity.[7]

Stability and functional correctness

As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.

In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor.[8] Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.

A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data and may not detect faults in those operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.

To further complicate matters, in process technologies such as silicon on insulator, devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.[9]

In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically-intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, Superpi, OCCT, IntelBurnTest/Linpack/LinX, SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
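For the memory side specifically, a comparable command-line check on Linux (a substitute example using the memtester utility, which is not one of the tools listed above) is:

$ sudo memtester 1024M 3

This locks 1 GiB of RAM and runs three passes of pattern tests over it; any reported failure at overclocked settings is a strong sign the memory clock or timings are too aggressive.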

Factors allowing overclocking

Overclockability arises in part due to the economics of the manufacturing processes of CPUs and other components. In most cases components with different rated clock rates are manufactured by the same process, and tested after manufacture to determine their actual ratings. The clock rate that the component is rated for is at or below the clock rate at which the CPU has passed the manufacturer's functionality tests when operating in worst-case conditions (for example, the highest allowed temperature and lowest allowed supply voltage). Manufacturers must also leave additional margin for reasons discussed below. Sometimes manufacturers produce more high-performing parts than they can sell, so some are marked as medium-performance chips to be sold for medium prices. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".[10]

Measuring effects of overclocking

Benchmarks are used to evaluate performance. The benchmarks can themselves become a kind of 'sport', in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime 95, as it has built-in error checking and the test fails if the machine is unstable.

Given only benchmark scores it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark attempt to replicate game conditions.

Variance

The extent to which a particular part will overclock is highly variable. Processors from different vendors, production batches, steppings, and individual units will all overclock with varying degrees of success.

Manufacturer and vendor overclocking

Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The retailer makes more money by buying lower-value components, overclocking them, and selling them at prices appropriate to a non-overclocked system at the new clock rate. In some cases an overclocked component is functionally identical to a non-overclocked one of the new clock rate; however, if an overclocked system is marketed as a non-overclocked one (it is generally assumed that unless a system is specifically marked as overclocked, it is not overclocked), the practice is considered fraudulent.

Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, an attractive option for enthusiasts seeking improved performance without sacrificing common warranty protections. Such factory-overclocked products may cost a little more than standard components, but may be more cost-effective than a product with a higher specification.

Naturally, manufacturers would prefer enthusiasts to pay additional money for profitable high-end products, in addition to concerns of less reliable components and shortened product life spans affecting brand image. It is speculated that such concerns are often motivating factors for manufacturers to implement overclocking prevention mechanisms such as CPU locking. These measures are sometimes marketed as a consumer protection benefit, which typically generates a negative reception from overclocking enthusiasts.

Advantages

  • The user can, in many cases, purchase a lower performance, cheaper component and overclock it to the clock rate of a more expensive component.
  • Higher performance in games, encoding, video editing applications, and system tasks at no additional expense, but at an increased cost for electrical power consumption. Particularly for enthusiasts who regularly upgrade their hardware, overclocking can increase the time before an upgrade is needed.
  • Some systems have "bottlenecks," where small overclocking of a component can help realize the full potential of another component to a greater percentage than the limiting hardware is overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the clock rate of four units of RAM to 333 MHz. However, the memory performance is computed by dividing the processor clock rate (which is a base number times a CPU multiplier, for instance 1.8 GHz is most likely 9x200 MHz) by a fixed integer such that, at a stock clock rate, the RAM would run at a clock rate near 333 MHz. By manipulating elements of how the processor clock rate is set (usually lowering the multiplier), one can often overclock the processor a small amount, around 100-200 MHz (less than 10%), and gain a RAM clock rate of 400 MHz (a 20% increase), releasing the full potential of the RAM.
  • Overclocking can be an engaging hobby in itself and supports many dedicated online communities. The PCMark website is one such site that hosts a leader-board for the most powerful computers to be bench-marked using the program.
  • A new overclocker with proper research and precaution, or a guiding hand, can gain useful knowledge and hands-on experience about their own system and PC systems in general.

Disadvantages

Many of the disadvantages of overclocking can be mitigated or reduced in severity by skilled overclockers. However, novice overclockers may make mistakes while overclocking which can introduce avoidable drawbacks and which are more likely to damage the overclocked components (as well as other components they might affect).

General

  • The lifespan of a processor may be reduced by higher operating frequencies, increased voltages and heat, although processors rapidly become obsolete in performance due to technological progress.
  • Increased clock rates and/or voltages result in higher power consumption.
  • While overclocked systems may be tested for stability before use using programs that "burn" the computer, these programs create an artificial strain that pushes individual or many components to their maximum (or beyond it). Some common stability programs are Prime 95, Super PI (32M), Intel TAT, LinX, PCMark, FurMark and OCCT. Stability problems may surface after prolonged usage due to new workloads or untested portions of the processor core. Aging effects previously discussed may also result in stability problems after a long period of time. Even when a computer appears to be working normally, problems may arise in the future. For example, Windows may appear to work with no problems, but when it is re-installed or upgraded, error messages such as a "file copy error" may appear during Windows Setup.[11] Microsoft says this of errors in upgrading to Windows XP: "Your computer [may be] over-clocked. Because over-clocking is very memory-intensive, decoding errors may occur when files are extracted from the Windows XP CD-ROM".
  • High-performance fans used for extra cooling can be noisy. Older popular models of fans used by overclockers can produce 50 decibels or more. Nowadays, however, manufacturers are overcoming this problem by designing fans with aerodynamically optimized blades for smoother airflow and minimal noise (around 20 decibels at approximately 1 metre). The noise is not always acceptable, and overclocked machines are often much noisier than stock machines. Noise can be reduced by utilizing strategically placed larger fans, which are inherently less noisy than smaller fans; by using alternative cooling methods (such as liquid and phase-change cooling); by lining the chassis with foam insulation; and by installing a fan-controlling bus to adjust fan speed (and, as a result, noise) to suit the task at hand. Now that overclocking is of interest to a larger target audience, this is less of a concern, as manufacturers have begun researching and producing high-performance fans that are no longer as loud as their predecessors. Similarly, mid- to high-end PC cases now implement larger fans (to provide better airflow with less noise) as well as being designed with cooling and airflow in mind.
  • Even with adequate CPU cooling, the excess heat produced by an overclocked processing unit increases the ambient air temperature of the system case; consequently, other components may be affected. Also, more heat will be expelled from the PC's vents, raising the temperature of the room the PC is in - sometimes to uncomfortable levels.
  • Overclocking has the potential to cause component failure ("heat death"). Most warranties do not cover damage caused by overclocking. Some motherboards offer safety measures to stop this from happening (e.g. limitations on FSB increase) so that only voltage control alterations can cause such harm.
  • Some motherboards are designed to use the airflow from a standard CPU fan to cool other heatsinks, such as the northbridge. If the CPU heatsink is changed on such boards, other heatsinks may receive insufficient cooling.
  • Overclocking a PC component may void its warranty (depending on the conditions of sale).
  • Changing the heatsink on a graphics card often voids its warranty.

Incorrectly performed overclocking

  • Increasing the operating frequency of a component will usually increase its thermal output in a linear fashion, while an increase in voltage usually causes heat to increase quadratically. Excessive voltages or improper cooling may cause chip temperatures to rise almost instantaneously, causing the chip to be damaged or destroyed.
  • More common than hardware failure is functional incorrectness. Although the hardware is not permanently damaged, this is inconvenient and can lead to instability and data loss. In rare, extreme cases entire filesystem failure may occur, causing the loss of all data.[12]
  • With poor placement of fans, turbulence and vortices may be created in the computer case, resulting in reduced cooling effectiveness and increased noise. In addition, improper fan mounting may cause rattling or vibration.
  • Improper installation of exotic cooling solutions like liquid cooling may result in failure of the cooling system, which may result in water damage.
  • With sub-zero cooling methods such as phase-change cooling or liquid nitrogen, extra precautions such as foam or spray insulation must be taken to prevent water from condensing on the PCB and other areas. This can cause the board to become "frosted" or covered in frost. While the water is frozen it is usually safe, but once it melts it can cause shorts and other damaging issues.
  • Some products claim to be intended specifically for overclocking but may be mere decoration. Novice buyers should be aware of the marketing hype surrounding some products. Examples include heat spreaders and heatsinks designed for chips which do not generate enough heat to benefit from these devices (capacitors, for example).

Limitations

The utility of overclocking is limited for a few reasons:

Personal computers are mostly used for tasks which are not computationally demanding, or which are performance-limited by bottlenecks outside of the local machine. For example, web browsing does not require a high performance computer, and the limiting factor will almost certainly be the bandwidth of the Internet connection of either the user or the server. Overclocking a processor will also do little to help increase application loading times as the limiting factor is reading data off the hard drive. Other general office tasks such as word processing and sending email are more dependent on the efficiency of the user than on the performance of the hardware. In these situations any performance increases through overclocking are unlikely to be noticeable.
It is generally accepted that, even for computationally-heavy tasks, clock rate increases of less than ten percent are difficult to discern. For example, when playing video games, it is difficult to discern an increase from 60 to 66 frames per second (FPS) without the aid of an on-screen frame counter. Overclocking of a processor will rarely improve gaming performance noticeably, as the frame rates achieved in most modern games are usually bound by the GPU at resolutions beyond 1024x768. One exception to this rule is when the overclocked component is the bottleneck of the system, in which case the most gains can be seen.

Graphics cards
The BFG GeForce 6800GSOC ships with higher memory and clock rates than the standard 6800GS.

Graphics cards can also be overclocked,[13] with utilities such as EVGA's Precision, RivaTuner, ATI Overdrive (on ATI cards only), MSI Afterburner, Zotac Firestorm on Zotac cards, or the PEG Link Mode on ASUS motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, and usually will improve game performance too. Sometimes it is possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done, by observing on-screen distortions known as artifacts. Two such "warning bells" are widely understood: green, flashing, random triangles appearing on the screen usually correspond to overheating problems on the GPU itself, while white, flashing dots appearing randomly (usually in groups) on the screen often mean that the card's RAM is overheating. It is common to run into one of these problems when overclocking graphics cards; showing both symptoms at the same time usually means the card is being pushed severely beyond its heat, clock rate, or voltage limits. If seen at normal clock rate, voltage, and temperature, they may indicate faults with the card itself.

However, if the video card is simply clocked too high and does not overheat, the artifacts are a bit different. There are many ways for this to show up, and any irregularity should be considered, but typically, if the core is pushed too hard, black circles or blobs appear on the screen, while overclocking the video memory beyond its limits usually results in the application or the entire operating system crashing. Fortunately, after the computer is restarted the settings are reset to stock (stored in the video card BIOS), and the maximum clock rate of that specific card has been found.

Some overclockers use a hardware voltage modification where a potentiometer is applied to the video card to manually adjust the voltage. This results in much greater flexibility, as overclocking software for graphics cards is rarely able to freely adjust the voltage. Voltage mods are very risky and may result in a dead video card, especially if the voltage modification ("voltmod") is applied by an inexperienced individual. A pencil volt mod refers to changing a resistor's value on the graphics card by drawing across it with a graphite pencil. This results in a change of GPU voltage. It is also worth mentioning that adding physical elements to the video card immediately voids the warranty.


Alternatives

Flashing and Unlocking are two popular ways to gain performance out of a video card, without technically overclocking.

Flashing refers to using the firmware of another card, based on the same core and design specs, to "override" the original firmware, thus effectively making it a higher model card; however, flashing can be difficult, and sometimes a bad flash can be irreversible. Sometimes stand-alone software to modify the firmware files can be found, e.g. NiBiTor (the GeForce 6/7 series are well regarded in this respect). It is not strictly necessary to acquire a firmware file from a better model video card, although the card whose firmware is to be used should be compatible, i.e. the same model base, design and/or manufacturing process, revisions, etc. For example, video cards with 3D accelerators (the vast majority of today's market) have two voltage and clock rate settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third being somewhere in the middle of the other two, serving as a fallback when the card overheats or as a middle stage when going from 2D to 3D operation mode. Therefore, it can be wise to set this middle stage prior to "serious" overclocking, specifically because of this fallback ability: the card can drop down to this clock rate, reducing its efficiency by a few (or sometimes a few dozen, depending on the setting) percent and cool down, without dropping out of 3D mode, and afterwards return to the desired high-performance clock and voltage settings.

Some cards also have certain abilities not directly connected with overclocking. For example, NVIDIA's GeForce 6600GT (AGP flavor) features a temperature monitor (used internally by the card), which is invisible to the user in the 'vanilla' version of the card's BIOS. Modifying the BIOS can allow a 'Temperature' tab to become visible in the card driver's advanced menu.

Unlocking refers to enabling extra pipelines and/or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only), and the Radeon X800 Pro VIVO were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipelines enabled, they share the same 16x6 GPU core as a 6800GT or Ultra; they may simply not have passed inspection when all their pipelines and shaders were enabled. In more recent generations, both ATI and NVIDIA have laser-cut pipelines to prevent this practice.[citation needed]

It is important to remember that while pipeline unlocking sounds very promising, there is absolutely no way to determine whether the "unlocked" pipelines will operate without errors, or at all (this information is solely at the manufacturer's discretion). In a worst-case scenario, the card may never start up again, leaving a "dead" piece of equipment. It is possible to revert to the card's previous settings, but this involves manual firmware flashing using special tools and an identical, original firmware.

Tuesday, 19 April 2011

Architecture and Features of the Itanium / i3 / i5 / i7 Processors

The Intel Itanium is a 64-bit processor developed by Intel and Hewlett-Packard that uses the IA-64 (Intel Architecture 64-bit) architecture. During development the processor carried the code name Merced, and it was released on 29 May 2001. It is aimed at the high-end server market, where high performance and mission-critical operation are required. The processor is genuinely new (not a successor to the Intel x86 line), since Intel designed it together with Hewlett-Packard.

Its architecture combines ideas from two earlier RISC processor designs: the HP PA-RISC and the Intel i860, which had sold poorly.
In general, the Intel Itanium processor offers the following features:

1. A pure 64-bit processor. It can nevertheless execute 32-bit Intel x86 code through a technology called the IA-32 Execution Layer (IA-32 EL), although performance there is unimpressive.

2. Able to address up to 16 terabytes of physical memory (using a 44-bit address bus).

3. EPIC (Explicitly Parallel Instruction Computing) technology, which allows the Itanium to perform up to 20 operations per cycle.

4. Two integer units and two memory units, which together can execute up to four instructions per clock.

5. Two floating-point units, called FMACs (Floating-Point Multiply Accumulate) in the Itanium, which handle operands of up to 82 bits and can execute two operations per clock.

6. Two additional MMX units, each capable of two floating-point operations; in total, up to eight single-precision floating-point operations can be executed per cycle.

7. 128 integer registers, 128 floating-point registers, 8 branch registers, and 64 predication registers.

Processor name: Intel Itanium

Code name: Merced

Clock speeds: 733 MHz, 800 MHz

Manufacturing process: 180 nanometer

Level-1 cache: 32 KB (16 KB data cache plus 16 KB instruction cache, able to deliver two instruction bundles [256 bits] per cycle)
Level-1 cache type: 4-way set associative, 32-byte blocks

Level-2 cache: 96 KB, on-die, running at full speed
Level-2 cache type: 6-way set associative, 64-byte blocks

Level-3 cache: 2048 KB or 4096 KB, on-cartridge, running at full speed
Level-3 cache type: 4-way set associative; it communicates with the level-2 cache over a 128-bit-wide path, giving a maximum throughput of 12.8 GB/s (16 bytes per clock × 800 MHz)

Front-side bus speed: 266 MHz

Memory bus width: 64 bits

Maximum memory bandwidth: 2128 MB/s (8 bytes per transfer × 266 MHz)

Transistor count: 25 million in the processor core, plus 150 million for the 2048 KB level-3 cache or 300 million for the 4096 KB version

Package type: a cartridge (as on the Pentium II/III), called the Pin Array Cartridge (PAC), which includes the level-3 cache

Motherboard interconnect: a 418-pin socket (not a slot, unlike the Pentium II/III)
Package weight: roughly 170 grams

Intel Itanium Architecture, or IA-64

Intel® 64 Architecture

The Intel 64 architecture, commonly called x64, allows both physical and virtual memory to be addressed beyond 4 GB, and it delivers improved performance compared with 32-bit x86. x64 is applied to server, workstation, desktop, and mobile platforms; combined with 64-bit software it forms a complete 64-bit computing environment. On the desktop, Intel's x64 processors also support the features below.

1. 64-bit flat virtual address space. At present (2009), a 32-bit, NT-based Microsoft operating system manages 4 GB of virtual memory and gives applications a flat 32-bit virtual address space. Of those 4 GB, each application (user-mode) process gets 2 GB, while the remaining 2 GB are reserved for Windows itself (kernel mode); the 2 GB an application can reach this way is said to be "directly addressable". Because 2^32 = 4 GB, a 32-bit system can never exceed that limit, whereas a 64-bit flat virtual address space enormously expands the directly addressable range, so applications end up with far more usable address space.

2. 64-bit pointers. In the C programming language, for example, "int a = 10;" stores the integer value 10 at some address in memory, and a pointer (declared with the * symbol) holds that address. Pointers let a skilled programmer implement algorithms compactly and efficiently. 64-bit pointer support means pointers themselves are 64 bits wide, so they can refer to any location in the 64-bit address space.

3. 64-bit general-purpose registers. When the CPU fetches data from memory, it performs its calculations in its own fast internal registers; besides special-purpose registers there are general-purpose ones. x64 widens these general-purpose registers to 64 bits.

4. 64-bit integer support. Integers divide into three kinds: negative numbers, zero, and positive numbers. With 32 bits, 2^32 = 4,294,967,296 values can be represented, i.e. 0 through 4,294,967,295 for non-negative integers (including negative integers shifts the range, but still covers only 4,294,967,296 values in total). With 64 bits, 2^64 = 18,446,744,073,709,551,616 values (roughly 1.8 × 10^19) are available.

5. Up to one terabyte (TB) of platform address space. 2^64 = 18,446,744,073,709,551,616, and computing the byte units gives:

  • 1024^0 bytes = 1 byte
  • 1024^1 bytes = 1 kilobyte (KB)
  • 1024^2 bytes = 1 megabyte (MB)
  • 1024^3 bytes = 1 gigabyte (GB)
  • 1024^4 bytes = 1 terabyte (TB)
  • 1024^5 bytes = 1 petabyte (PB)
  • 1024^6 bytes = 1 exabyte (EB)

18,446,744,073,709,551,616 / 1024^6 = 16 exabytes, so 16 EB is the numerical limit of 64-bit addressing; expressed in memory terms, that would be the equivalent of 17,179,869,184 sticks of 1 GB RAM. Current 64-bit platforms do not actually reach that theoretical 16 EB limit, however: in practice the platform address space tops out at 1 TB, i.e. 1024 GB of addressable space.
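As a quick illustration of the features above, whether a processor and operating system are actually running in 64-bit mode can be checked from a Linux shell; output will of course differ from machine to machine.

# Machine hardware name; "x86_64" indicates a 64-bit kernel
$ uname -m

# Word size of the current userland (prints 32 or 64)
$ getconf LONG_BIT

# The "lm" (long mode) CPU flag means the processor supports x86-64
$ grep -m1 -ow lm /proc/cpuinfo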

The Itanium processor is supported by several operating systems, among them Microsoft Windows XP 64-bit Edition and Windows 2000 Advanced Server Limited Edition 2002 from Microsoft Corporation; GNU/Linux (supplied by several major distribution makers such as Red Hat, SuSE, Caldera, Debian, and Turbo Linux); and two versions of UNIX, Hewlett-Packard's HP-UX and IBM's AIX. Operating systems that support IA-32 EL include Windows Server 2003 Enterprise Edition, Windows Server 2003 Datacenter Edition, Windows XP 64-bit Edition, and several newer GNU/Linux distributions using 2.6.x kernels.


Intel Core i3

The Core i3 530 runs at 2.93 GHz and has no Turbo mode: it drops to 1.33 GHz at its lowest frequency and never runs faster than 2.93 GHz, even under full load. The missing Turbo Boost is a real sacrifice, although the 530 still has 4 MB of L3 cache shared between its two cores.

Intel Core i5

The Core i5's uncore runs at 2.13 GHz, down from 2.40 GHz, so performance should suffer slightly as a result. Besides Turbo Boost, the other thing given up on the Core i3 is AES acceleration: Westmere's AES instruction set (AES-NI) is disabled on all Core i3 models. There has to be some reason for users to choose an i5 instead.


Intel Core i7

The Core i7 is the newest generation of Intel processors and has been touted as the fastest processor in the world. Like its predecessors (the Core 2 Duo, Core i3, and Core i5), the Core i7 has a 64-bit architecture. It carries from 4 MB to 12 MB of L3 cache, with power ratings from 18 W to 130 W, and comes in LGA 1156 and LGA 1366 socket versions.
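Specifications like these can be verified on one's own machine; on Linux, for example, the following read-only commands report the processor model, clocks, and cache sizes (output varies per system).

# Summary of sockets, cores, clocks, and cache sizes
$ lscpu

# Or pull just the model string from /proc/cpuinfo
$ grep -m1 "model name" /proc/cpuinfo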

 